A.I. - Human Job Replacement

  • Thread starter: erobz
  • Featured
  • Tags: Intelligence, Job
Summary
The discussion centers on the potential for AI to replace human jobs, particularly in engineering, with ChatGPT suggesting engineers are at higher risk. Participants note that while automation has historically replaced certain tasks, the engineering field remains robust, with low unemployment rates. The conversation highlights the importance of adaptability for workers as AI evolves, emphasizing that engineers often aim to streamline their roles. Concerns are raised about AI's limitations, including its lack of judgment and long-term memory, which could hinder its ability to fully replace human engineers. Ultimately, the consensus leans towards a belief that while AI will change the landscape, complete replacement of engineers is not imminent.
  • #31
I know this thread has aged, but it popped up again, so:
erobz said:
I don't see the disparity between it and humans in that regard. I don't care how good you are; in general I'd say the vast majority of humans aren't coming from first principles when saying whether or not they understand a topic.
If I'm understanding your point correctly, I'd say I disagree. When you're asking AI a question it is not supposed to be the equivalent of asking any random human (but it currently kind of is, and that's a flaw). The assumption should be that you're asking a leading expert in the subject of the question. When you search Wikipedia for information on quantum mechanics, you expect that it was written by a physicist, not a random human. If you're not sure, you look at a textbook instead, so you can be certain of the expertise of the source human. ChatGPT isn't fully curated, as far as I know. There's limited filtering of chaff from wheat. It's providing a statistical average response, so if a certain subject contains a lot of chaff online, you'll get a response with a lot of chaff.
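To make the "statistical average" point concrete, here's a toy sketch (Python, purely illustrative; real language models are far more complex): a model that samples answers in proportion to their frequency in its training data reproduces chaff at roughly the rate it occurs online.

```python
import random

# Toy "corpus" of answers to one question, of mixed quality:
# 6 accurate answers, 4 pieces of chaff (misconceptions, crackpottery).
corpus = ["accurate"] * 6 + ["chaff"] * 4

def toy_model(corpus, n_samples=10_000):
    """Sample answers in proportion to their frequency in the corpus."""
    return [random.choice(corpus) for _ in range(n_samples)]

responses = toy_model(corpus)
chaff_rate = responses.count("chaff") / len(responses)
print(f"Fraction of chaff in responses: {chaff_rate:.2f}")  # ~0.40
```

If 40% of what's online about a subject is chaff, roughly 40% of what comes back is chaff; nothing in the sampling step filters wheat from chaff.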
erobz said:
Moreover, the chatbots are designed with no "long-term" memory with regard to users. So when you correct it, it can/does understand... but because of its forced architecture it must throw away the learning. At least that is what ChatGPT is telling me.
There are two related reasons for this, as I understand it:

1. Maintaining control of the model and its growth. There was an early chatbot (Microsoft's Tay) that was allowed to learn and grow; it quickly learned trolling/hate/racism, incorporated that into its model, and had to be shut down.

2. Current chatbots lean heavily toward a positive user experience. They are programmed to appear pleasant and agreeable. That means they agree with and "learn" whatever you tell them, even if it's wrong. If what they learned were allowed to persist, they would rapidly accrue garbage in their models.

For the second one, I'm not sure if AI chatbots are currently capable of true rational, thought-based learning (I lean no), but they could certainly be programmed not to promote crackpottery. I suspect the researchers/programmers would prefer that (prioritizing quality), but the businesspeople would not (prioritizing user engagement/sales).
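To make the "no long-term memory" point concrete, here is a minimal sketch of the stateless request/response pattern these chat services generally follow; the function and message format below are hypothetical stand-ins, not any vendor's actual API. The model only ever sees the history you send with each call, so a correction evaporates unless the client resends it.

```python
def chat(history: list[dict]) -> str:
    """Stand-in for a stateless model call: the model sees only `history`."""
    # A real call would send `history` to the model and return its reply;
    # here we just report what the model would have had access to.
    return f"(model saw {len(history)} messages, nothing else)"

# Session 1: the user corrects the model.
session_1 = [
    {"role": "user", "content": "2+2?"},
    {"role": "assistant", "content": "5"},
    {"role": "user", "content": "No, it's 4."},
]
print(chat(session_1))  # the correction is visible *within* this session

# Session 2: a fresh history; the earlier correction is simply gone.
session_2 = [{"role": "user", "content": "2+2?"}]
print(chat(session_2))  # model saw 1 message; no memory of session 1
```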
 
  • #32
...also, I think OP's ambitious chatbot's take is predicated on several big assumptions:
  • General purpose AI is coming soon.
  • It will be a large step-change in capabilities vs what we have now (as opposed to happening gradually).
  • It will be cheap.
  • It will be widely available/run on current hardware/internet.
While I realize some of these may be goals and some match the current products, I don't think any of them are necessarily true of general-purpose AI, or on Day 1.
 
  • #33
erobz said:
I don't see the disparity between it and humans in that regard. I don't care how good you are; in general I'd say the vast majority of humans aren't coming from first principles when saying whether or not they understand a topic. Moreover, the chatbots are designed with no "long-term" memory with regard to users. So when you correct it, it can/does understand... but because of its forced architecture it must throw away the learning. At least that is what ChatGPT is telling me.
I see a difference.

"What is 2+2?"
Me: "5".
"Are you sure?"
Me: "Um. 2+2 = .... oh! 4!"


"What is 2+2?"
AI: "5".
"Are you sure?"
AI: "My apologies. Let me correct that. 2+2 = 5."
"Are you sure?"
AI: "My apologies. Let me correct that. 2+2 = 356."


Humans are capable of questioning their own knowledge, which leads to self-correction.
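A toy sketch of that difference (purely hypothetical, not any real chatbot): the human-style answerer re-derives the result when challenged, while the sycophantic one just generates a fresh apology and a fresh guess.

```python
import random

def human_answer(challenge_count: int) -> str:
    """On a challenge, re-derive the answer from first principles."""
    if challenge_count == 0:
        return "5"        # initial slip
    return str(2 + 2)     # actually recompute: always lands on 4

def sycophant_answer(challenge_count: int) -> str:
    """On a challenge, apologize and emit a new guess; no re-derivation."""
    if challenge_count == 0:
        return "5"
    return f"My apologies. Let me correct that. 2+2 = {random.randint(0, 400)}"

for k in range(3):
    print("human:", human_answer(k))
    print("ai:   ", sycophant_answer(k))
```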
 
  • #34
https://arstechnica.com/tech-policy...to-deploy-ai-agents-across-the-us-government/
The reactions and comments on the Slack post were awesome. :oldlaugh:

The post was not well received. Eight people reacted with clown face emojis, three reacted with a custom emoji of a man licking a boot, two reacted with custom emoji of Joaquin Phoenix giving a thumbs down in the movie Gladiator, and three reacted with a custom emoji with the word “Fascist.” Three responded with a heart emoji.

“How ‘DOGE orthogonal’ is it? Like, does it still require Kremlin oversight?” another person said in a comment that received five reactions with a fire emoji. “Or do they just use your credentials to log in later?”
 
  • #35
erobz said:
I don't see the disparity between it and humans in that regard. I don't care how good you are; in general I'd say the vast majority of humans aren't coming from first principles when saying whether or not they understand a topic.

How can you say such nonsense, after being on a scientific forum like this for quite a while?
 
  • #36
@erobz isn't wrong. While the quality is higher here on PF, it's not on other sites. We know this from some of our members asking if site so-and-so or video so-and-so is accurate and correct.

One distinction is that we humans know where to look when we suspect something isn't right with an explanation given to us by another human.

However, with an AI, errors can appear anywhere, and we have to scrutinize every part of an answer before accepting it.

It is akin to the common calculator error of entering degree measurements into a calculator when the mode is set to radians or vice versa.
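That trap is easy to reproduce; in Python, for example, math.sin expects radians, so feeding it a value in degrees silently yields a plausible-looking wrong answer:

```python
import math

angle_deg = 90

# Wrong: math.sin expects radians, so this computes sin(90 rad).
print(math.sin(angle_deg))                # 0.8939... plausible but wrong

# Right: convert degrees to radians first.
print(math.sin(math.radians(angle_deg)))  # 1.0
```

The wrong answer is still a perfectly ordinary-looking number between -1 and 1, which is exactly why the error is easy to miss; AI answers can fail the same way.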
 
  • #37
weirdoguy said:
How can you say such nonsense, after being on a scientific forum like this for quite a while?
I also agree with @erobz. You cannot judge humanity by looking at it through the PF window. Most people cobble together bits and pieces of data in a somewhat coherent fashion to "understand" something. PF is an exception since most participants have been trained to think from the bottom up. The average person is more susceptible to their biases, finding it too much trouble to delve further into an issue.
 
  • #38
gleem said:
You cannot judge humanity by looking at it through the PF window.

Ok, but you don't ask random people for, e.g., medical advice. You ask a doctor. And you assume they know what they are talking about. I interpreted @erobz as saying that even in such situations those people are like AI, warranting the same level of trust:

erobz said:
I don't see the disparity between it and humans in that regard. I don't care how good you are; in general I'd say the vast majority of humans aren't coming from first principles when saying whether or not they understand a topic.

I interpreted the bolded part as "including when you are a specialist".
 
  • #43
According to the article below, 30% of code is currently AI-generated. LinkedIn has 8,000 entry-level programmer positions listed, which is probably worth at least three-quarters of a billion dollars in operating costs (presumably 8,000 positions at roughly $100k each, i.e., about $800 million). How long will these last?

from https://www.entrepreneur.com/business-news/ai-is-taking-over-coding-at-microsoft-google-and-meta/490896

Other C-suite executives have predicted that AI will soon take over coding. Microsoft CTO Kevin Scott said on the 20VC podcast last month that AI will write 95% of code within the next five years. Dario Amodei, the CEO of $61 billion AI startup Anthropic, had an even more accelerated timeline, stating last month that AI would write "essentially all of the code" for companies within the next year.

Earlier this week, Duolingo CEO Luis von Ahn said that the company would replace human contract workers with AI.

This month, Shopify CEO Tobias Lutke told all of his employees that using AI effectively was now a "fundamental expectation of everyone at Shopify."

As tech companies turn to AI for coding, they are laying off human software engineers. According to layoff-tracker Layoffs.fyi, over 51,000 tech employees have been laid off at 112 companies so far this year.
 
  • #44
I may not disagree about AI's capabilities; my simple question is, what happens in the case of AI failure? Assuming AI manages 100% of check-ins at airports, what happens if there is a system glitch?
 
  • #45
chwala said:
I may not disagree about AI's capabilities; my simple question is, what happens in the case of AI failure? Assuming AI manages 100% of check-ins at airports, what happens if there is a system glitch?
In any case, the responsibility must rest with humans. For example, a death caused by an AI's failure (or its normal behavior) must be the responsibility of humans. Otherwise, we'll have to open prisons for AIs...
 
  • #46
chwala said:
I may not disagree about AI's capabilities; my simple question is, what happens in the case of AI failure? Assuming AI manages 100% of check-ins at airports, what happens if there is a system glitch?
The airport shuts down. That's not an AI-specific thing; we're already there with the computers we have:

https://en.m.wikipedia.org/wiki/2024_CrowdStrike-related_IT_outages
 
  • #47
russ_watters said:
The airport shuts down. That's not an AI-specific thing; we're already there with the computers we have:

https://en.m.wikipedia.org/wiki/2024_CrowdStrike-related_IT_outages
Thanks, and I agree that the 2024 outage is a perfect case study.

My core point wasn't just about AI in airport check-ins; that was an example to illustrate something broader: if AI (or any tech system) fully runs a critical industry, a failure of that system doesn't just create inconvenience, it can paralyze the entire operation.

That level of dependence validates the continuing need for human presence, oversight, and fallback strategies: not because AI is useless, but because no system is infallible, and resilience often requires human judgment.
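One common way to encode that fallback idea in software is a wrapper that degrades to a human-staffed path when the automated one fails; here is a minimal sketch, with hypothetical function names:

```python
def automated_checkin(passenger: str) -> str:
    """Stand-in for the AI/automated path; may raise during an outage."""
    raise RuntimeError("check-in system outage")  # simulate a CrowdStrike-style failure

def human_checkin(passenger: str) -> str:
    """Fallback path: route the passenger to a staffed desk."""
    return f"{passenger}: routed to human agent"

def checkin(passenger: str) -> str:
    try:
        return automated_checkin(passenger)
    except Exception:
        # No system is infallible: degrade gracefully instead of halting.
        return human_checkin(passenger)

print(checkin("Alice"))  # Alice: routed to human agent
```

The point isn't the code; it's that the human path has to exist and be staffed before the outage happens.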
 
  • #49
I've had the link to the original site in my signature for years. Always a good read but too often found in repos.
 
  • #50
I asked ChatGPT to translate literary text for me; so far nobody has said it is horrible.
I asked it to write code parts for me: not bad, but I had to fix small things. My old classmate said he does physical jobs too, which pure software can't replace. Of course a robot could, but the greater the hardware requirement, the less incentive there is to use it.
 
  • #51

MIT report: 95% of generative AI pilots at companies are failing

https://www.yahoo.com/finance/news/mit-report-95-generative-ai-105412686.html

Good morning. Companies are betting on AI—yet nearly all enterprise pilots are stuck at the starting line.

The GenAI Divide: State of AI in Business 2025, a new report published by MIT’s NANDA initiative, reveals that while generative AI holds promise for enterprises, most initiatives to drive rapid revenue growth are falling flat.

Despite the rush to integrate powerful new models, about 5% of AI pilot programs achieve rapid revenue acceleration; the vast majority stall, delivering little to no measurable impact on P&L. The research—based on 150 interviews with leaders, a survey of 350 employees, and an analysis of 300 public AI deployments—paints a clear divide between success stories and stalled projects.

This CEO laid off nearly 80% of his staff because they refused to adopt AI fast enough. 2 years later, he says he'd do it again

https://fortune.com/2025/08/17/ceo-laid-off-80-percent-workforce-ai-sabotage/

Eric Vaughan, CEO of enterprise-software powerhouse IgniteTech, is unwavering as he reflects on the most radical decision of his decades-long career. In early 2023, convinced that generative AI was an “existential” transformation, Vaughan looked at his team and saw a workforce not fully on board. His ultimate response: He ripped the company down to the studs, replacing nearly 80% of staff within a year, according to headcount figures reviewed by Fortune.
 
  • #52
Goes to show how out of touch management can be with the actual workings on the "factory floor". As long as modern buzzwords (or "buzz-concepts") are used, stocks may go up, but actual productivity (or, in this case, the lives of random people) suffers.
 
  • #53
sbrothy said:
Best repo I have ever starred in my life
AI is already replacing some software engineering roles... and to be honest you can't half blame the companies: why pay someone $100k for what you could do with just $10 of AI credits? My parents showed me a new AI tool just today. It wrote an entire app with a fully functioning backend, with sign-in and auth, as well as full functionality for what we wanted the app to do (which was not simple, and included mucking about with APIs, SQLite... and more). It designed a beautiful React frontend and coded the entire project; we literally just had to sit there and watch. It cost just over $9 worth of Claude API credits. They said (and they are engineers) that for this sort of project you would have to pay an engineer for at least a month to get it going. We were just giving it a shot for fun, but it was crazy.

Oh yeah, and as engineers they are also practically mandated by their companies to use AI in their jobs. Companies are keen to get people to adopt this stuff.
 
  • #54
russ_watters said:
[…] There was an early chatbot (Microsoft's Tay) that was allowed to learn and grow; it quickly learned trolling/hate/racism, incorporated that into its model, and had to be shut down.

[…]

I vividly remember that one! It was ugly and embarrassing, but somehow only surprising to its creators.
 
  • #55
sbrothy said:
I vividly remember that one! It was ugly and embarrassing, but somehow only surprising to its creators.
Yeah, it's baffling that they didn't see it coming or even test for it.
 
  • #56
Astronuc said:

MIT report: 95% of generative AI pilots at companies are failing

https://www.yahoo.com/finance/news/mit-report-95-generative-ai-105412686.html



This CEO laid off nearly 80% of his staff because they refused to adopt AI fast enough. 2 years later, he says he'd do it again

https://fortune.com/2025/08/17/ceo-laid-off-80-percent-workforce-ai-sabotage/
he discovered after training literally thousands of people that “most people hate learning. They’d avoid it if they can.”

I like learning, with the exception of most software packages. It can be hard to do even the simplest things. Blender (the big animated-graphics package) is a nightmare as far as I'm concerned.
 
  • #57
russ_watters said:
Yeah, it's baffling that they didn't see it coming or even test for it.
Indeed. I suspect that just banning it from the various "chan" sites, "ED", and perhaps Reddit would have been enough. Maybe only allowing it Google as a search engine.

Nah, it's probably pervasive throughout the internet.
 
  • #58
Goodwill CEO says he’s preparing for an influx of jobless Gen Zers because of AI—and warns, a youth unemployment crisis is already happening

https://www.yahoo.com/news/articles/goodwill-ceo-says-preparing-influx-090000273.html

Tech leaders have been quick to squash claims that their AI firms could one day cause significant unemployment. But Goodwill’s CEO Steve Preston says it’s already happening.

The charity, which has over 650 job centers, saw over 2 million people use its employment services last year—and it’s getting ready for even more.

“We are preparing for a flux of unemployed young people—as well as other people—from AI,” the CEO exclusively told Fortune, adding that automation will hit low-wage and entry-level roles the worst.
. . . .
 