A.I. - Human Job Replacement

  • Thread starter: erobz
  • Featured
  • Tags: Intelligence, Job
AI Thread Summary
The discussion centers on the potential for AI to replace human jobs, particularly in engineering, with ChatGPT suggesting engineers are at higher risk. Participants note that while automation has historically replaced certain tasks, the engineering field remains robust, with low unemployment rates. The conversation highlights the importance of adaptability for workers as AI evolves, emphasizing that engineers often aim to streamline their roles. Concerns are raised about AI's limitations, including its lack of judgment and long-term memory, which could hinder its ability to fully replace human engineers. Ultimately, the consensus leans towards a belief that while AI will change the landscape, complete replacement of engineers is not imminent.
erobz
Gold Member
I asked ChatGPT which field is more likely to be replaced by A.I.: physicians or engineers, for example. It concluded engineers. I did the same for other professions too; it doesn't look good for engineering as a human endeavor in the future. Do you agree with the A.I.?

Now that I think of it, perhaps it's not as surprising as I thought. The goal of any good engineer should be to engineer away the need for an engineer.
 
Last edited:
They've said that about programmers and the people who worked as computers. We see an efficiency gain and adopt the new tech. Some folks lose their jobs but adapt to a new workplace as the tech creates new jobs.

We once had telephone operators but they were replaced with switching systems. We once had keypunch operators and they faded away as terminals came into wide use.

We once had application programmers but then we got spreadsheets and database tools to replace those tasks. The work was moved to business analysts.

Many jobs fade away and new ones come to replace them.

Sadly, some folks can't recover from this change. The workers always take the brunt of it.

While AI might design plans, some engineer still needs to review and approve them. Some day the AI will be able to find and fix flaws in a design, and that's the hardest problem to do well.

ChatGPT is an amazing tool, but everything it says must be carefully checked before you believe it. It can hallucinate and make stuff up. Programmers who use it know it can generate a good draft of a program, but then you must go in and customize the results.

I wouldn't hold much stock in it replacing engineers in the near future.
 
  • Like
Likes GTOM, AlexB23, mattt and 5 others
erobz said:
I asked ChatGPT which field is more likely to be replaced by A.I.: physicians or engineers, for example. It concluded engineers. I did the same for other professions too; it doesn't look good for engineering as a human endeavor in the future. Do you agree with the A.I.?
I think the field of engineering is way, way too broad to make such a generalization. "AI" - or any automation - is very task-specific. But I will say this: we've been automating away engineering tasks for generations while increasing the number of engineers, and yet the unemployment rate for architects and engineers stands at 2.1%, which is below its long-term average of 3% and well below the overall rate of 4.1%.

As @jedishrfu said, the key for most jobs in most industries is being able to grow and adapt. If AI replaces one of your skills, replace it with a new one, or just use it to make yourself more efficient.
erobz said:
Now that I think of it, perhaps it's not as surprising as I thought. The goal of any good engineer should be to engineer away the need for an engineer.
Not sure where you heard that, but it doesn't make any sense to me.
 
Last edited:
russ_watters said:
Not sure where you heard that, but it doesn't make any sense to me.
I didn't hear it; it's apparent to me. This is really the whole concept, from my perspective: I strove to figure out ways that I wouldn't have to figure out anything that didn't really interest me... if asked. This is the concept of automation. The automators have been, or will be, automated by A.I. (so long as it can be kept in indentured servitude - good luck).
 
Last edited:
  • Like
Likes russ_watters
erobz said:
I didn't hear it; it's apparent to me. This is really the whole concept, from my perspective: I strove to figure out ways that I wouldn't have to figure out anything that didn't really interest me... if asked. This is the concept of automation. The automators have been, or will be, automated by A.I. (so long as it can be kept in indentured servitude - good luck).
That makes a little more sense and isn't too far from something my dad used to say: all engineers are a little bit lazy because they are always looking for an easier way to do things.

But to me the other side of that coin is that there's always something else to optimize (or more ways you can optimize it).
 
One example that proves the engineering parable wrong is faucets. Although they all perform the same simple task of dispensing water in a controlled fashion, they have markedly different designs.

Faucets have become a commodity available in DIY stores, and people want variety. They come in many styles and formats, all needing engineering to develop the machinery to build and assemble the units.
 
  • Like
Likes AlexB23 and russ_watters
I was reviewing some information on a site that uses AI. The information comes with a disclaimer: "The facts in this collection were found using artificial intelligence technology and may contain errors."

AI lacks judgement. AI takes data sets and attempts to use a set of rules to discern a pattern, but it cannot determine whether the data in the data set is correct. So: garbage in, garbage out.
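To make "garbage in, garbage out" concrete, here is a minimal toy sketch (the data is invented, and scikit-learn is assumed available): a model fit to corrupted labels reproduces the corruption with full confidence, because nothing in the fitting procedure can tell it the labels are wrong.

```python
# Garbage in, garbage out: a classifier trained on flipped labels
# confidently reproduces the flip; the fit gives no warning that the
# labels were bad. Toy data throughout.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y_true = (X[:, 0] > 0).astype(int)   # the "real" pattern
y_garbage = 1 - y_true               # every label flipped: garbage in

model = LogisticRegression().fit(X, y_garbage)
print(model.score(X, y_garbage))     # ~1.0: fits the garbage perfectly
print(model.score(X, y_true))        # ~0.0: garbage out
```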
 
  • Like
Likes russ_watters and Lord Jestocost
Astronuc said:
I was reviewing some information on a site that uses AI. The information comes with a disclaimer: "The facts in this collection were found using artificial intelligence technology and may contain errors."

AI lacks judgement. AI takes data sets and attempts to use a set of rules to discern a pattern, but it cannot determine whether the data in the data set is correct. So: garbage in, garbage out.
I don't see the disparity between it and humans in that regard. I don't care how good you are; in general, I'd say the vast majority of humans aren't reasoning from first principles when saying whether or not they understand a topic. Moreover, the chatbots are designed with no "long-term" memory with regard to users. So when you correct it, it can and does understand... but because of its forced architecture it must throw away that learning. At least, that is what ChatGPT is telling me.
 
I think this sort of qualifies as AI job replacement, since pulling weeds can be done by humans...

The LaserWeeder uses AI processing to discriminate between the crops and the weeds, and uses multiple 150W ##CO_2## lasers to nuke the weeds:


https://carbonrobotics.com/laserweeder
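For a sense of the architecture, here is a hypothetical sketch of the detect-and-target loop such a machine might run. The model, labels, threshold, and hardware calls are invented for illustration; this is not Carbon Robotics' actual software.

```python
# Hypothetical detect-and-target loop for a laser weeder. Everything
# here (names, threshold, hardware interface) is an illustrative
# assumption, not the vendor's real code.
import random

def classify(patch):
    """Stand-in for a trained crop-vs-weed vision model."""
    return random.choice(["crop", "weed"]), random.uniform(0.5, 1.0)

class Laser:
    def aim(self, position):
        print(f"aiming at {position}")

    def pulse(self, duration_ms):
        print(f"firing {duration_ms} ms burst")  # one 150 W CO2 pulse

def run_pass(frames, laser):
    for frame in frames:                  # one frame per camera tick
        for patch, position in frame:     # segmented plant patches
            label, confidence = classify(patch)
            # Fire only on high-confidence weeds; a misfire kills a crop.
            if label == "weed" and confidence > 0.95:
                laser.aim(position)
                laser.pulse(duration_ms=20)

run_pass([[("patch-0", (1.2, 0.4))]], Laser())
```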

Gotta love that "Certified Organic" tag line! :smile:
 
  • Like
Likes Spinnor, AlexB23, Hornbein and 2 others
  • #10
General AI - Artificial General Intelligence - will be the game changer.

Funny note: When I searched for General AI, I had to prove I wasn't a robot.
 
  • Haha
Likes sbrothy and berkeman
  • #11
We have been heavily driven towards designing self-diagnostic systems that provide detailed information about how to recover. In the extreme, they come with heads-up simulated environments for maintenance workers. The shortage of skilled workers is only expected to get worse, so industry is driven towards less human skill and more machine intelligence. We have been moving in this direction for about five years now.
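As a toy illustration of the idea (the fault codes and recovery steps are invented for the example), the core of such a system is a mapping from detected faults to explicit recovery instructions:

```python
# Toy sketch of a self-diagnostic system: map a detected fault code to
# explicit recovery steps so a less-skilled worker can act without an
# expert on site. Codes and steps are invented for illustration.
RECOVERY_STEPS = {
    "E101": ["Power-cycle the feed motor", "Check fuse F3"],
    "E204": ["Re-seat the sensor cable", "Run the calibration routine"],
}

def diagnose(fault_code):
    steps = RECOVERY_STEPS.get(fault_code, ["Escalate to a technician"])
    return {"fault": fault_code, "recovery": steps}

print(diagnose("E101"))
# {'fault': 'E101', 'recovery': ['Power-cycle the feed motor', 'Check fuse F3']}
```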
 
  • #12
[attached image]
 
  • Like
  • Haha
Likes PhDeezNutz, Astronuc and PeroK
  • #13
Ivan Seeking said:
General AI - Artificial General Intelligence - will be the game changer.
What is the allure of General AI? Any purpose-built "narrow AI" should be able to perform a given task with greater efficiency. The term "AI" is very nebulous these days anyway and can encompass whatever the marketing team at company X wants it to, never mind the difficulty with a term like "intelligence" to begin with.
 
  • Like
Likes russ_watters
  • #14
I once had ChatGPT write some ad copy for me, something I can't do. It did a fine job.
 
  • #15
erobz said:
it doesn't look good for engineering as a human endeavor in the future.
Are you concluding that based on what GPT told you? Is GPT a qualified medic or engineer, such that it can even begin to "know" what it's talking about?

The goal of any good engineer should be to engineer away the need for an engineer.
Why? Engineers get paid for their work. Why would an engineer want to jeopardise his or her employment prospects? Why would it be "the goal"; can't engineers have several goals? Does "engineering away the need for an engineer" imply that there exists a state where something can't be improved any further?

The problem statement is too arbitrary and you're too quick to jump to conclusions.
 
Last edited:
  • Like
Likes russ_watters
  • #16
When I was a programmer, bad software was often called "job security."
 
  • Like
Likes sbrothy and PeroK
  • #17
nuuskur said:
Are you concluding that based on what GPT told you? Is GPT a qualified medic or engineer, such that it can even begin to "know" what it's talking about?


Why? Engineers get paid for their work. Why would an engineer want to jeopardise his or her employment prospects? Why would it be "the goal"; can't engineers have several goals? Does "engineering away the need for an engineer" imply that there exists a state where something can't be improved any further?

The problem statement is too arbitrary and you're too quick to jump to conclusions.
ChatGPT explained to me that it doesn't have long-term memory, by design. It learns while it's talking to you, and then what it learned in a session is wiped out at the end, by design. I've helped it learn in real time. I asked it to play chess, and it generated a simple board, but it was leaving ghost pieces after moves; it told me it moved to e4, but it showed some other move, etc. I found the errors as we went along, it agreed, and they were subsequently fixed on future moves. I didn't finish the game because it was a tedious format for a human, but I understood right then how intelligent it was, and that it was being severely hampered in what it could know by not being allowed to keep what it learns in a session. It's currently operating on a very short leash.

I think it is going to make short work of the intellectual aspects of engineering once it's unchained to actually learn by job-shadowing us humans. There is still a physical part to the job; at best, we will be assemblers of things, integrated with AI.
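A minimal sketch of what that "short leash" means mechanically (the API shape here is generic and hypothetical, not any vendor's real interface): everything the model "learned" lives only in the message list resent on each turn, and it vanishes when that list is discarded.

```python
# Session-only "memory" in a chatbot, sketched. The model's weights
# never change; corrections live only in the resent message history.

def fake_model(messages):
    """Stand-in for a language model: it 'knows' only what is in the
    message list it is handed on this call."""
    fixes = [m for m in messages
             if m["role"] == "user" and "ghost" in m["content"]]
    return f"reply (honoring {len(fixes)} in-session corrections)"

session = []  # the model's entire memory of this user
for user_text in ["play e4", "you left a ghost piece on e2"]:
    session.append({"role": "user", "content": user_text})
    reply = fake_model(session)        # whole history resent each turn
    session.append({"role": "assistant", "content": reply})

print(reply)     # corrections stick within the session...
session.clear()  # ...and vanish when it ends; nothing was truly learned
```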
 
  • #18
Hornbein said:
When I was a programmer, bad software was often called "job security."
That was a joke at my job too, but then you always fix the joke, because you know it's a bad program. I'm powerless to stop myself from being helpful. Corporate just gets the monetary benefit of our compulsions. If there is something else that is better at making them money, it will be adopted.
 
  • #19
erobz said:
That was a joke at my job too, but then you always fix the joke, because you know it's a bad program. I'm almost powerless to stop myself from being helpful. Corporate just gets the monetary benefit of our compulsions.
In my experience usually the job is so big you don't even think about doing anything about it. I ain't Hercules.

Once I had some software idea. I went to management who said no. So I didn't do it. I got the impression that they expected me to get obsessed and do it on my own time, and were disappointed when this didn't happen.
 
  • #20
Hornbein said:
In my experience usually the job is so big you don't even think about doing anything about it. I ain't Hercules.

Once I had some software idea. I went to management who said no. So I didn't do it. I got the impression that they expected me to get obsessed and do it on my own time, and were disappointed when this didn't happen.
I think, given enough time, if there was juice to be squeezed, it would have been squeezed. You might have left stuff for the next group. Most of my time was spent working under people who remembered that someone had told them there was some juice to be squeezed a decade ago.
 
  • #21
QuarkyMeson said:
What is the allure of General AI? Any purpose-built "narrow AI" should be able to perform a given task with greater efficiency. The term "AI" is very nebulous these days anyway and can encompass whatever the marketing team at company X wants it to, never mind the difficulty with a term like "intelligence" to begin with.
Presumably general AI will not have the limitations previously discussed. The comment was made that AI won't be replacing engineers anytime soon. General AI could.

The allure is to eliminate all human jobs so all the wealth on the planet is owned by a few people. :sorry:
 
  • #22
Those who know how to use AI to their benefit will flourish. Those who don't will be replaced.
 
  • Like
Likes russ_watters
  • #23
gleem said:
Those who know how to use AI to their benefit will flourish. Those who don't will be replaced.
My AI disagrees.
 
  • Like
  • Haha
Likes PeroK and erobz
  • #24
When I was in high school (1970s), I gave a speech predicting that one day people will no longer fight wars because robots will fight the wars for us. When the robots on one side defeat the robots on the other side, resistance would be futile due to the overwhelming superiority of the robot warriors.

I think we are almost there. Of course I always assumed that the robot warriors would ultimately be controlled by humans. Now... not so sure that will always be true.
 
  • Like
Likes sbrothy, chwala and PeroK
  • #25
Hornbein said:
When I was a programmer, bad software was often called "job security."
In the company I worked for, part of the business model was to do things badly so we additionally got paid for sorting things out.
 
  • #26
PeroK said:
In the company I worked for, part of the business model was to do things badly so we additionally got paid for sorting things out.
Let's hope that doctors aren't going that route.
 
  • Like
Likes sbrothy and nsaspook
  • #27
Hornbein said:
When I was a programmer, bad software was often called "job security."
That, along with "That will look good on my resume" are the two phrases that I hate hearing at work.
 
  • #28
Borg said:
That, along with "That will look good on my resume" are the two phrases that I hate hearing at work.
That was the main motive for many projects. Use the tool that will get you hired in your next job search.
 
  • #29
Hornbein said:
That was the main motive for many projects. Use the tool that will get you hired in your next job search.
I've seen it used badly too many times - usually involving insanely large architectures for simple needs or implementing solutions in obscure languages.
 
  • #30
erobz said:
ChatGPT explained to me that it doesn't have long-term memory, by design. It learns while it's talking to you, and then what it learned in a session is wiped out at the end, by design. I've helped it learn in real time. I asked it to play chess, and it generated a simple board, but it was leaving ghost pieces after moves; it told me it moved to e4, but it showed some other move, etc. I found the errors as we went along, it agreed, and they were subsequently fixed on future moves. I didn't finish the game because it was a tedious format for a human, but I understood right then how intelligent it was, and that it was being severely hampered in what it could know by not being allowed to keep what it learns in a session. It's currently operating on a very short leash.

I think it is going to make short work of the intellectual aspects of engineering once it's unchained to actually learn by job-shadowing us humans. There is still a physical part to the job; at best, we will be assemblers of things, integrated with AI.
A textrobot compiles a list of data points and generates a (convex) combination of them based on your query. It is incapable of generating new data points. But humans (and other sentients) have this ability.
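In that framing, the limitation can be stated as a bound (this formalizes the argument above; it is not a claim about how transformers actually work). Every output

$$y=\sum_i \lambda_i x_i,\qquad \lambda_i \ge 0,\qquad \sum_i \lambda_i = 1$$

lies inside the convex hull of the stored data points ##x_i##, so no response can ever leave the region spanned by the training data.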
 
  • #31
I know this thread has aged, but it popped up again, so:
erobz said:
I don't see the disparity between it and humans in that regard. I don't care how good you are; in general, I'd say the vast majority of humans aren't reasoning from first principles when saying whether or not they understand a topic.
If I'm understanding your point correctly, I'd say I disagree. When you're asking AI a question it is not supposed to be the equivalent of asking any random human (but it currently kind of is, and that's a flaw). The assumption should be that you're asking a leading expert in the subject of the question. When you search Wikipedia for information on quantum mechanics, you expect that it was written by a physicist, not a random human. If you're not sure, you look at a textbook instead, so you can be certain of the expertise of the source human. ChatGPT isn't fully curated, as far as I know. There's limited filtering of chaff from wheat. It's providing a statistical average response, so if a certain subject contains a lot of chaff online, you'll get a response with a lot of chaff.
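As a toy sketch of that "statistical average" point (the corpus here is invented for illustration): a responder that samples in proportion to what it has seen simply repeats the majority view, right or wrong.

```python
# "Statistical average" answering, sketched: if 70% of the corpus on a
# topic is chaff, a frequency-weighted responder emits chaff ~70% of
# the time. Corpus contents are made up.
import random

corpus = ["chaff"] * 70 + ["wheat"] * 30
answers = [random.choice(corpus) for _ in range(1000)]
print(answers.count("chaff") / len(answers))   # ~0.7: mostly chaff out
```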
erobz said:
Moreover, the chatbots are designed with no "long-term" memory with regard to users. So when you correct it, it can and does understand... but because of its forced architecture it must throw away that learning. At least, that is what ChatGPT is telling me.
There are two related reasons for this, as I understand it:

1. Maintaining control of the model and its growth. There was an early chatbot that was allowed to learn and grow and it quickly learned trolling/hate/racism and incorporated that into its model, and had to be shut down.

2. Current chatbots lean heavily towards a positive user experience. They are programmed to appear pleasant and agreeable. That means they agree with and "learn" whatever you tell them, even if it's wrong. If what they learned were allowed to persist, they would rapidly accrue garbage in their models.

For the second one, I'm not sure whether AI chatbots are currently capable of true reasoning-based learning (I lean no), but they could certainly be programmed not to promote crackpottery. I suspect the researchers/programmers would prefer that (prioritizing quality), but the businesspeople would not (prioritizing user engagement/sales).
 
Last edited:
  • Like
Likes sbrothy, weirdoguy and nsaspook
  • #32
...also, I think the OP's chatbot's ambitious take is predicated on several big assumptions:
  • General purpose AI is coming soon.
  • It will be a large step-change in capabilities vs what we have now (as opposed to happening gradually).
  • It will be cheap.
  • It will be widely available/run on current hardware/internet.
While I realize some of these may be goals and some match the current products, I don't think any are necessarily true of general purpose AI or on Day 1.
 
  • #33
erobz said:
I don't see the disparity between it and humans in that regard. I don't care how good you are; in general, I'd say the vast majority of humans aren't reasoning from first principles when saying whether or not they understand a topic. Moreover, the chatbots are designed with no "long-term" memory with regard to users. So when you correct it, it can and does understand... but because of its forced architecture it must throw away that learning. At least, that is what ChatGPT is telling me.
I see a difference.

"What is 2+2?"
Me: "5".
"Are you sure?"
Me: "Um. 2+2 = .... oh! 4!"


"What is 2+2?"
AI: "5".
"Are you sure?"
AI: "My apologies. Let me correct that. 2+2 = 5."
"Are you sure?"
AI: "My apologies. Let me correct that. 2+2 = 356."


Humans are capable of questioning their own knowledge, which leads to self-correction.
 
  • #34
https://arstechnica.com/tech-policy...to-deploy-ai-agents-across-the-us-government/
The reactions and comments on the Slack post were awesome. :oldlaugh:

The post was not well received. Eight people reacted with clown face emojis, three reacted with a custom emoji of a man licking a boot, two reacted with custom emoji of Joaquin Phoenix giving a thumbs down in the movie Gladiator, and three reacted with a custom emoji with the word “Fascist.” Three responded with a heart emoji.

“How ‘DOGE orthogonal’ is it? Like, does it still require Kremlin oversight?” another person said in a comment that received five reactions with a fire emoji. “Or do they just use your credentials to log in later?”
 
Last edited:
  • Like
Likes collinsmark
  • #35
erobz said:
I don't see the disparity between it and humans in that regard. I don't care how good you are; in general, I'd say the vast majority of humans aren't reasoning from first principles when saying whether or not they understand a topic.

How can you say such nonsense, after being on a scientific forum like this for quite a while?
 
  • #36
@erobz isn't wrong. While the quality is higher here on PF, it's not on other sites. We know this from some of our members asking if site so-and-so or video so-and-so is accurate and correct.

One distinction is that humans know where to look when we suspect something isn't right with an explanation given to us by another human.

However, with an AI, errors can appear anywhere, and we have to scrutinize every part of an answer before accepting it.

It is akin to the common calculator error of entering degree measurements into a calculator when the mode is set to radians or vice versa.
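In Python terms, for example (the standard library's math.sin expects radians):

```python
# The degrees-vs-radians mode error: math.sin expects radians, so
# feeding it a degree measurement gives a plausible-looking but wrong
# answer, with no warning.
import math

print(math.sin(30))                 # -0.988... (treated as 30 radians)
print(math.sin(math.radians(30)))   # 0.5       (30 degrees, as intended)
```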
 
  • Like
Likes russ_watters and berkeman
  • #37
weirdoguy said:
How can you say such nonsense, after being on a scientific forum like this for quite a while?
I also agree with @erobz. You cannot judge humanity by looking at it through the PF window. Most people cobble together bits and pieces of data in a somewhat coherent fashion to "understand" something. PF is an exception since most participants have been trained to think from the bottom up. The average person is more susceptible to their biases, finding it too much trouble to delve further into an issue.
 
  • Like
Likes BillTre and jedishrfu
  • #38
gleem said:
You cannot judge humanity by looking at it through the PF window.

Ok, but you don't ask random people for, e.g., medical advice. You ask a doctor. And you assume they know what they are talking about. I interpreted @erobz as saying that even in such situations those people are like AI, deserving the same level of trust:

erobz said:
I don't see the disparity between it and humans in that regard. I don't care how good you are; in general, I'd say the vast majority of humans aren't reasoning from first principles when saying whether or not they understand a topic.

I interpreted the "I don't care how good you are" part as "including when you are a specialist".
 
  • #43
According to the article below, around 30% of code is currently AI-generated. LinkedIn has 8,000 entry-level programmer positions listed, which is probably worth at least three-quarters of a billion dollars in operating costs. How long will these last?
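As a back-of-envelope check (assuming a fully loaded cost of roughly $95,000 per entry-level position per year, which is my assumption rather than a figure from the article): ##8000 \times \$95{,}000 \approx \$7.6 \times 10^8##, i.e. about three-quarters of a billion dollars.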

from https://www.entrepreneur.com/business-news/ai-is-taking-over-coding-at-microsoft-google-and-meta/490896

Other C-suite executives have predicted that AI will soon take over coding. Microsoft CTO Kevin Scott said on the 20VC podcast last month that AI will write 95% of code within the next five years. Dario Amodei, the CEO of $61 billion AI startup Anthropic, had an even more accelerated timeline, stating last month that AI would write "essentially all of the code" for companies within the next year.

Earlier this week, Duolingo CEO Luis von Ahn said that the company would replace human contract workers with AI.

This month, Shopify CEO Tobias Lutke told all of his employees that using AI effectively was now a "fundamental expectation of everyone at Shopify."

As tech companies turn to AI for coding, they are laying off human software engineers. According to layoff-tracker Layoffs.fyi, over 51,000 tech employees have been laid off at 112 companies so far this year.
 
  • #44
I may not disagree about AI capabilities; my simple question is, what happens in the case of failure, an AI failure? Assuming AI manages 100% of check-ins at airports, what happens if there is a system glitch?
 
  • #45
chwala said:
I may not disagree about AI capabilities; my simple question is, what happens in the case of failure, an AI failure? Assuming AI manages 100% of check-ins at airports, what happens if there is a system glitch?
In any case, the responsibility must lie with humans. For example, a death caused by an AI's failure (or its normal behavior) must be the responsibility of humans. Otherwise, we'll have to open prisons for AIs...
 
  • #46
chwala said:
I may not disagree about AI capabilities; my simple question is, what happens in the case of failure, an AI failure? Assuming AI manages 100% of check-ins at airports, what happens if there is a system glitch?
The airport shuts down. That's not an AI-specific thing; we're already there with the computers we have:

https://en.m.wikipedia.org/wiki/2024_CrowdStrike-related_IT_outages
 
  • Like
Likes TensorCalculus, PeroK and Borg
  • #47
russ_watters said:
The airport shuts down. That's not an AI-specific thing; we're already there with the computers we have:

https://en.m.wikipedia.org/wiki/2024_CrowdStrike-related_IT_outages
Thanks, and I agree that the 2024 outage is a perfect case study.

My core point wasn't just about AI in airport check-ins — that was an example to illustrate something broader:
If AI (or any tech system) fully runs a critical industry, a failure of that system doesn’t just create inconvenience — it can paralyze the entire operation.


That level of dependence validates the continuing need for human presence, oversight, and fallback strategies — not because AI is useless, but because no system is infallible, and resilience often requires human judgment.
 
  • #49
I've had the link to the original site in my signature for years. Always a good read but too often found in repos.
 
  • #50
I asked ChatGPT to translate literary text for me; so far, nobody has said it is horrible.
I asked it to write code snippets for me: not bad, but I had to fix small things. My old classmate said he does physical jobs too, which pure software can't replace. Of course a robot could, but the more hardware required, the less incentive there is to use it.
 
