A Crisis for Newly Minted CompSci Majors -- entry-level jobs gone

  • Thread starter: jedishrfu
AI Thread Summary
Fresh computer science graduates are facing a significant employment crisis, with unemployment rates between 6.1% and 7.5%, far exceeding those of other majors. The rise of AI has led to the automation of many entry-level coding jobs, while high-paying positions in machine learning are increasingly reserved for more experienced candidates. The perceived value of computer science degrees is declining, as some successful individuals are finding lucrative opportunities without formal degrees. Experts suggest that pursuing degrees in physical sciences may be more advantageous in the current job market. The demand for software testing is expected to grow due to the complexities introduced by AI, highlighting the need for adaptability in the tech industry.
https://techcrunch.com/2025/08/10/the-computer-science-dream-has-become-a-nightmare

The computer science dream has become a nightmare:

The coding-equals-prosperity promise has officially collapsed.

Fresh computer science graduates are facing unemployment rates of 6.1% to 7.5% — more than double what biology and art history majors are experiencing, according to a recent Federal Reserve Bank of New York study. A crushing New York Times piece highlights what’s happening on the ground.
 
  • Informative
  • Like
Likes Yael129, TensorCalculus and BvU
jedishrfu said:
Both my parents are software engineers and I have many friends who aspire to be software engineers: so I hear about this a lot. It is a bit of a nightmare...
Even my parents use AI all the time nowadays: in fact AI use is sort of mandated by their companies.
AI has been taking all of the entry-level jobs: but it's also been making new jobs. Jobs in ML can be super high paying and are becoming more and more abundant as AI becomes more widespread. So... maybe all is not lost. (But those jobs will likely go to people more experienced with coding rather than people fresh out of uni, now that I think about it.)
The value of CS degrees is also dropping. I know a kid, honestly really, really brilliant, who dropped out of Imperial College and is now building a business. He also got himself some job offers as a 19-year-old with no degree, including some really high-paying ones, jobs that even CS graduates struggle to get.

On the bright side: people say that physics/maths degrees are the ones to pursue rather than CS in this age: NVIDIA's CEO Jensen Huang said that if he had to pick a degree to study in this new age, it would be physical sciences and not software sciences.
 
  • Like
  • Agree
Likes Astronuc, berkeman, russ_watters and 2 others
Yes, it's sad that this has happened. Looking over my 45 years as a software engineer and having played with AI, I can understand why companies are doing this.

In the 1970s, there were keypunch operators responsible for handling the workload of inputting data into the computer and transcribing what programmers wrote on coding sheets. Then the TTY was introduced, and they experienced a reduction in their workload when programmers began using terminals and modems. As technology advanced, data was transferred electronically, significantly decreasing their data input load.

Also in the 1970s, nearly every manager had a secretary, and then a few years later, secretaries began handling multiple managers as the need to type letters and documentation decreased.

Some executive bosses would act as lazy editors, requiring the secretary to keep revising their letters. Secretaries were not happy campers when that happened.

Then the Wang word processor came out, and secretaries were elated because they didn't have to retype a letter to fix a mistake. It looked bad if there was even one mistake in correspondence with a client.

But the Wang revolution was the beginning of a new wave of change.

The PC revolution came and allowed managers to type their letters, and secretarial work was reduced to nothing.

The remaining few secretaries handled various employee requests and handed out pay stubs until EFT became the norm and employees got PCs for work.

At IBM, some secretaries went to the chip production line for better pay at the risk of exposure to the toxic chemicals.

Then it was the progression from mainframes to minicomputers to PCs that reduced the need for tape jockeys, the guys who kept operations on the mainframe running smoothly.

There were also support personnel who printed specialized stock certificates and other important documents on offline MDS printers, and still others who bundled card decks with their output and delivered them to the GE main plant.

Technology constantly threatens your job, and you must be adaptable to change.

I have a physics degree that enabled me to work on CS, math, or physics projects in my last employment. I also transitioned my programming language proficiency from Fortran/COBOL/Assembler work to C/C++/Assembler, and then to Java/Python work with Docker included.

So take care, save your pennies in whatever matching 401K retirement plans are offered to have a better retirement once you get off the eternal Hamster Wheel.

The Twilight Zone, though dated, had a marvelous episode, "The Brain Center at Whipple's," where a machine replaced everyone. The only jobs left were the machine repair roles and the factory head.

The end of the episode showed Robby the Robot swinging Mr. Whipple's watch around. Meanwhile, Mr. Whipple was in the local bar saying, "I gave my life to the company and now look what they did."

His former employees were not amused by his situation since he didn't care about them.

---

TESTING RULES!

The need for folks to test the code will increase, since AI hallucinations may be around for quite a while. Companies want to avoid legal entanglements like the one a Canadian airline experienced when its chatbot gave a customer false advice about bereavement fares.

Hallucinations can also happen when models change or when the system prompts that set up the models are modified, so testers are needed to test, test, and retest.

Maybe even some lawyers can get involved, since they've been trained to study contracts and look for loopholes and unfavorable terms. They would make good testers, in a role rather safer than ending up as dino meat à la Jurassic Park.
 
Last edited:
  • Like
  • Informative
Likes Astronuc, jtbell, berkeman and 3 others
jedishrfu said:
The only jobs left were the machine repair roles and the factory head.
jedishrfu said:
TESTING RULES!
Yes indeed.

In the latter part of my career (network engineer, retired 2.5 years now) much of my value to the organization was in troubleshooting.

As things have become more complex over the decades, the process of troubleshooting has become deeper and more difficult. There are oh so many ways for things (including diagnostic tools) to go wrong.
 
  • Like
Likes Astronuc and TensorCalculus
A significant part of my career was in testing, first as a mainframe programmer writing scripts to test new software releases on our timesharing service and later on the batch service.

Then there were VLSI testers using PC controllers, for which we wrote a custom language and compiler so that the QA team could certify IBM's newest chips through testing.

Some years later, I worked as a test lead for a major IBM software project. We wrote demo code to test new features and showcase how those features can be used.

But the most recent thing I've seen is a new wave where one AI tests another. There was a paper describing how an older LLM in the lineage taught a new LLM and passed on its bad behavior via coded numbers that researchers didn't understand.

The program can acquire a self-healing module that corrects errors on the fly, i.e., directed self-modifying code, reducing the need for application testing.

In the future, testers may need to become subject matter experts to spot AI hallucinations.

But again, AI can replace that job, and we’re back to the days of Mr Whipple.

But testing is still the best way to go.
 
Last edited:
  • Like
Likes TensorCalculus
jedishrfu said:
A major part of my career was in testing. First as a mainframe programmer writing scripts to test new software releases on our timesharing service snd later on batch service.

Then there was VLSI testers using PC controllers where we wrote custom language and compiler so that the QA team could certify via test IBM’s newest chips.

Some years later, I worked as test-lead for a major IBM software project. We wrote demo code to test new features and showcase those features can be used.

But the most recent things i’ve seen is a new wave where one AI tests another. There was a paper describing how an older LLM of the lineage taught a newt LLM and passed its bad behavior via coded numbers that researchers didn’t understand.
Hmm: I never thought about the need for testing.
Would another example of AIs testing/training each other be DeepSeek? It used some of ChatGPT's output in order to train itself, and that's why at first it was identifying itself as ChatGPT (whether it was entirely trained on ChatGPT or not is a matter of debate) - distillation, I think it's called. The training method used on DeepSeek meant that it was able to get amazing results at a fraction of the cost of its competitors.
jedishrfu said:
Program can acquire a self healing module that corrects errors on the fly ie directed self modifying code and less need for application testing.

In the future testers may need to become subject matter experts in order to spot AI hallucinations.

But again that job can be replaced by AI and we’re back to the days of Mr Whipple.

But I still think testing is the best way to go.
Your argument seems pretty reasonable: you've convinced me that testing is the way...
 
jedishrfu said:
A major part of my career was in testing....
...
But I still think testing is the best way to go.

TensorCalculus said:
Hmm: I never thought about need for testing.

When I was a PhD student in physics, some of my classmates were pursuing master's degrees in computer science on the side. (I took a C++ course with them and was contemplating switching to a programming career.)
Microsoft came to do interviews. While "programmer" or "developer" seemed to be the preferred dream job, "tester" seemed to be the most common job offered to those who made it through the rounds of interviews.
 
  • Like
  • Informative
Likes TensorCalculus and jedishrfu
Yes, IBM routinely hired summer interns to do product testing, answering questions like these (a rough sketch of such checks follows the list):

- does it install on Windows, macOS, and Linux
- does it uninstall and leave no trace
- does it check for prereqs like memory, CPU, and disk space
- does internationalized code work
- does it work with other products
- how much disk/memory does it use
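
For illustration only, here is a rough Python sketch of the kind of automated pre-install checks such an intern might script. The requirement names and numbers are made up for the example, not anything IBM actually used.

```python
# A rough sketch of automated pre-install checks; the product requirements
# (supported OSes, free-disk threshold) are hypothetical.

import platform
import shutil

REQUIRED_OS = {"Windows", "Darwin", "Linux"}   # macOS reports as "Darwin"
REQUIRED_FREE_DISK_GB = 2.0
INSTALL_PATH = "/"   # directory whose free space matters for the install

def check_os() -> bool:
    """Is the current operating system one the product supports?"""
    return platform.system() in REQUIRED_OS

def check_disk() -> bool:
    """Is there enough free disk space at the install location?"""
    free_gb = shutil.disk_usage(INSTALL_PATH).free / 1e9
    return free_gb >= REQUIRED_FREE_DISK_GB

if __name__ == "__main__":
    results = {"supported OS": check_os(), "enough free disk": check_disk()}
    for name, ok in results.items():
        print(f"{name}: {'PASS' if ok else 'FAIL'}")
```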
 
jedishrfu said:
TESTING RULES!

The need for folks to test the code will increase since AI hallucinations may be around for quite a while.
Agreed, software testing is a very important part of software development, and the company I worked at for many years always had a "System Test" group that did QA testing on all new software releases prior to them being sent to customers. As a hardware engineer who designed a lot of the new hardware products, I worked really closely with the software developers and System Test to make sure that the product worked well and we did not ship buggy products to customers.

On many occasions we would not know initially if a problem was due to some hardware issue (me) or some software/firmware issue, so I have spent a lot of time in the lab and in meetings with SW/QA to sort things out. I've even built special observation fixtures specifically so that QA can check out ideas about what may be causing a problem. One of them traced the execution path of a uC to see what memory locations were touched when performing different tasks (logic analyzer observability was not possible in this case), and it helped them to narrow down the part of the code that was having a problem.

At one point in this company's history, they cut way back (to almost zero) the System Test group, and tried to outsource more of the software development. That bit them in the butt as many more bugs started showing up in the field, and costing them extra money in our Customer Support group and resulting in lost sales. Very soon after that there was a company-wide initiative to bring back the System Test group and make sure that each software release had adequate testing before being sent to customers.

And as a recent example of AI probably messing up customer software experiences... I needed to schedule an appointment to get my wife's Prius windshield replaced (too much time spent in sandy areas near the beach had pitted it so much that sun glare would cause very reduced visibility). I made the appointment with the largest auto glass replacement company in the US (I won't mention their name), and I made it for 10AM in Santa Cruz to minimize the hassle of having to drive through morning commute traffic. Unfortunately, the many response e-mails and texts that I received confirming my appointment and reminding me of the appointment had a mix of several different times: 10AM PST, 10AM, 11AM. Since we are now in Pacific Daylight Time (PDT), that was especially worrisome since 10AM PST = 11AM PDT. I wanted to call the repair shop to confirm that the appointment was for 10AM PDT so I would not have to wait an extra hour when I arrived, but unfortunately the automated replies did not list an actual human phone contact to call. Sigh.

Luckily my 10AM appointment went off on time, and all was good. I did mention the software bug in my Google review of the appointment, but who knows if some System Test person at <unnamed company> actually read my Google review... :wink:
 
  • Wow
Likes TensorCalculus
  • #10
jedishrfu said:
Yes IBM routinely hired summer interns to do product test to answer questions:

I told a Microsoft recruiter that I thought being a developer was more interesting than being a tester because developers get to be creative.
He said that creativity is needed for testers as well.
They have to find creative ways to test the software...
can the tester somehow break the software by doing some legal or illegal operations that the developer didn't anticipate?
So, that changed the way I saw testers.
 
  • Like
  • Informative
Likes symbolipoint, berkeman and TensorCalculus
  • #11
Yes, we had a couple of gifted testers. One coworker was tasked with testing internationalization code before the translators got involved. Basically, that meant all displayed text should appear in the selected language and there should be no English text anywhere.

All messages were stored in properties files. She created her own language, a "Martian" locale in which every English message was prefixed with an X, then launched the application and ran a battery of test cases. Sure enough, a few error messages were missing the X prefix and hadn't been properly internationalized.
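
For readers who haven't seen the trick, here is a minimal Python sketch of that "Martian locale" idea, assuming the application's messages live in a Java-style .properties file; the file layout and example messages are hypothetical.

```python
# Sketch of a pseudo-locale ("Martian") check: prefix every externalized
# message with X, then flag any displayed message that lacks the prefix,
# since it must have been hard-coded in English. File names and messages
# here are hypothetical.

PREFIX = "X"

def make_martian_locale(src_path: str, dst_path: str) -> None:
    """Write a pseudo-locale where every message value is prefixed with X."""
    with open(src_path, encoding="utf-8") as src, open(dst_path, "w", encoding="utf-8") as dst:
        for line in src:
            stripped = line.strip()
            if not stripped or stripped.startswith("#") or "=" not in stripped:
                dst.write(line)              # keep comments and blanks unchanged
                continue
            key, value = stripped.split("=", 1)
            dst.write(f"{key}={PREFIX}{value}\n")

def find_untranslated(displayed_messages: list[str]) -> list[str]:
    """Messages shown to the user without the X prefix bypassed the properties file."""
    return [msg for msg in displayed_messages if not msg.startswith(PREFIX)]

if __name__ == "__main__":
    # Example: two messages came through the properties file, one was hard-coded.
    seen = ["XSave complete", "XFile not found", "Connection refused"]
    print("Not internationalized:", find_untranslated(seen))
```

A message that shows up on screen without the X prefix never went through the properties files, which is exactly the hard-coded English the technique is designed to catch.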

Another coworker, while testing a demo application, found it failed to properly connect to the network using the product's API. He kept tugging at this defect for weeks, filing defect reports that got rejected by the development team. But he persisted, and finally the developers reluctantly and sheepishly admitted that there was a serious design flaw.

---

In my case as test lead, I took an interest in the temporary install code. Other folks had developed scripts to download a build and then tweaked their systems, adding parameters to the environment. Builds also depended on specific prerequisite products at specific versions.

I wrote a fancy awk script that showed all available builds, downloaded and unzipped them ready for test or development, and verified that the necessary changes had been made. Our developer team would routinely send out group letters saying new parameters had been added or others needed to be tweaked in the environment. They expected us to remember every letter and set up our test machines accordingly.

Being the team lead, I wanted to avoid false defect reports caused by missing some parameter. This script gave my team a consistent test machine environment. Developers liked the script too, because it verified that the prerequisites were installed and the environment was set up correctly.
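
The original was an awk script, so purely as an illustration, here is a rough Python sketch of the same idea: verifying that a test machine has the expected parameters set before anyone files a defect. The variable names and required versions are made up for the example.

```python
# Rough sketch of verifying a test machine's environment before testing,
# so missing parameters don't turn into false defect reports.
# REQUIRED_ENV entries are hypothetical examples.

import os

REQUIRED_ENV = {
    "BUILD_HOME": None,    # must be set; any value accepted
    "DB_VERSION": "9.1",   # must be set to this exact value
}

def verify_environment() -> list[str]:
    """Return a list of problems; an empty list means the machine is ready for test."""
    problems = []
    for name, expected in REQUIRED_ENV.items():
        actual = os.environ.get(name)
        if actual is None:
            problems.append(f"{name} is not set")
        elif expected is not None and actual != expected:
            problems.append(f"{name} is {actual!r}, expected {expected!r}")
    return problems

if __name__ == "__main__":
    issues = verify_environment()
    print("Environment OK" if not issues else "\n".join(issues))
```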

However, one person hated my script: our project lead, who had taken it upon himself to write the InstallShield code and found that the team preferred my script. He disliked it because my checks, which identified what parameters needed to be set up, along with other things like disk space and memory, were not part of his InstallShield code. I became the target of his wrath.

---

The downside of being a tester was that the developers looked down on you, and you became the scapegoat for every scheduling delay. They disliked the test team finding bugs that they then had to investigate and either fix or reject. The test team was blamed when schedules weren't met because we found too many defects, and blamed again when a defect slipped through our net and was found by a customer.

This may change as developer roles will be diminished and tester roles will flourish until...

...the AI catches up with them too.

For now TESTERS RULE!
 
  • Like
  • Informative
Likes berkeman, TensorCalculus and robphy
  • #12
I just realised that my mom is a WiFi tester and has been for the past 10 years: and I hadn't known until I asked her for her opinion on testers.
Whoops.
jedishrfu said:
Yes IBM routinely hired summer interns to do product test to answer questions:

- does it install on Windows, macOS, and Linux
- does it uninstall and leave no trace
- does it check for prereqs like memory, CPU, and disk space
- does internationalized code work
- does it work with other products
- how much disk/memory does it use
I feel sorry for those summer interns :(
Don't get me wrong, it's nice to get an internship, but that sounds like a pretty mundane one. Maybe there's creativity required in testing... but not this type...
berkeman said:
Agreed, software testing is a very important part of software development, and the company I worked at for many years always had a "System Test" group that did QA testing on all new software releases prior to them being sent to customers. As a hardware engineer who designed a lot of the new hardware products, I worked really closely with the software developers and System Test to make sure that the product worked well and we did not ship buggy products to customers.

On many occasions we would not know initially if a problem was due to some hardware issue (me) or some software/firmware issue, so I have spent a lot of time in the lab and in meetings with SW/QA to sort things out. I've even built special observation fixtures specifically so that QA can check out ideas about what may be causing a problem. One of them traced the execution path of a uC to see what memory locations were touched when performing different tasks (logic analyzer observability was not possible in this case), and it helped them to narrow down the part of the code that was having a problem.

At one point in this company's history, they cut way back (to almost zero) the System Test group, and tried to outsource more of the software development. That bit them in the butt as many more bugs started showing up in the field, and costing them extra money in our Customer Support group and resulting in lost sales. Very soon after that there was a company-wide initiative to bring back the System Test group and make sure that each software release had adequate testing before being sent to customers.
Why would they get rid of it in the first place? It makes no sense in my head?
 
  • #13
TensorCalculus said:
Why would they get rid of it in the first place? It makes no sense in my head?
That's the sort of thing that happens on IT projects!
 
  • Wow
Likes TensorCalculus
  • #15
TensorCalculus said:
Why would they get rid of it in the first place? It makes no sense in my head?
To try to save money on engineering headcount and other related expenses. Management's idea was to outsource more of the code development to places like India and Hungary, and count on those companies to do their own System Testing. But their idea of testing was not up to the standards we had here (see jedi's comments above), and many more bugs started making it through to customers. Not a good situation.
 
  • Informative
Likes symbolipoint and TensorCalculus
  • #16
berkeman said:
To try to save money on engineering headcount and other related expenses. Management's idea was to outsource more of the code development to places like India and Hungary, and count on those companies to do their own System Testing. But their idea of testing was not up to the standards we had here (see jedi's comments above), and many more bugs started making it through to customers. Not a good situation.
Oh that makes a bit more sense now: I can see why they would have thought to do that.
I don't know about Hungarians, but there are definitely a lot of talented Indian coders: there have been multiple instances of companies claiming to be some sort of AI or new automated technology... but then they ended up just being anonymous Indians...
 
  • #17
TensorCalculus said:
multiple instances of companies claiming to be some sort of AI or new automated technology... but then they ended up just being anonymous Indians
So, it seems "Anonymous Indians" is a "sort of AI".
 
  • Haha
Likes fresh_42 and TensorCalculus
  • #18
I once heard that a defect we found in-house cost the company $20 to fix.

If it was found in a beta, it was $200 to fix.

If it was found by a customer (i.e., potentially many customers), a patch had to be coded, tested, and deployed, for about $2000.

So that was the incentive to test well.
 
  • Like
  • Wow
Likes berkeman and TensorCalculus
  • #19
Concerning cutbacks, I once worked on a significant project with a satellite developer team from California. After the project, they were given awards for their excellence, then sadly the team was disbanded and they were let go.

The company gave away some of its best developers, who then went to the company’s competitors.
 
Last edited:
  • Informative
Likes symbolipoint
  • #20
jedishrfu said:
Wrt to cutbacks I once worked a major project a satellite developer team from California. At the completion of the project, they given awards for their excellence and sadly disbanded and let go.

The company gave away some its best develops that who then went to the company’s competitors.
The company's own fault... disbanding them.
 
  • #21
jedishrfu said:
an older LLM of the lineage taught a newt LLM and passed its bad behavior via coded numbers that researchers didn’t understand.
I didn't think a newt was smart enough to learn an LLM, let alone understand coded numbers ...

Bad, bad newt!
 
  • Haha
Likes TensorCalculus, berkeman and jedishrfu
  • #23
jedishrfu said:
I once heard that defect we found cost the company $20 to fix.

If found in a beta it was $200 to fix.

If found by a customer ie many customers a patch had to be coded, tested, and deployed for about $2000.

So that was the incentive to test well.
It's like the Powers of Ten for not-testing-well. :wink:
 
  • Haha
Likes TensorCalculus and jedishrfu
  • #24
jedishrfu said:
I once heard that defect we found cost the company $20 to fix.

If found in a beta it was $200 to fix.

If found by a customer ie many customers a patch had to be coded, tested, and deployed for about $2000.

So that was the incentive to test well.
It all depends who's paying. If you are clever enough you get the customer to pay for the bug fixes!
 
  • #25
PeroK said:
It all depends who's paying. If you are clever enough you get the customer to pay for the bug fixes!
I was referring to how my company estimated it internally; of course the customer ultimately pays, otherwise you won’t be in business for long.
 
  • #26
PeroK said:
It all depends who's paying. If you are clever enough you get the customer to pay for the bug fixes!
Is that the norm for government digitization contracts, since
1. the government has deep pockets, and
2. the actual end customer will complain to the government, not the coding company?

I can understand going over budget due to complexity, but not the sub-standard product.
One of the more famous examples was the digitization of public-service payroll - Phoenix Pay - which resulted in underpay, overpay, or no pay for a segment of employees.
It proved unfixable, so the government of Canada is now moving to Ceridian's Dayforce.
 
  • #27
@jedishrfu can you give concrete examples of how an AI hallucinates, or what that means in practice?

I have only used LLMs for understanding steps in math texts, whether in textbooks or journal articles. I have also used an LLM to write me a script for text processing of LaTeX code for posting on here. For the math tasks, I always ask the AI to give relevant references. If it can't do a computation, it will say something like "the calculation of such-and-such expression seems complex," and then just claim that, after finishing the calculation, I will get the conclusion I am looking for.

I check the same questions with two or three LLMs to compare the results. In all cases, I ask the LLM to provide references in the form of books, online notes, or scholarly articles.

I am not an expert in the mathematical theory of LLMs, nor in how such learning models coupled with other ML algorithms can be considered equivalent to making valid inferences. I can only try to make sure I can trace back to the sources the AI looked up online when it tried to derive its conclusion. That way, if it did make mistakes, I at least know what not to trust.
 
  • #28
A few months ago, I was researching Julia, a fairly new programming language from MIT.

ChatGPT gave me a nice summary of Julia along with some URLs to tutorial sites. None of them worked; they were all fake.

Later I asked for citations on a topic, and half of them didn't exist. Books that were mentioned didn't exist, and neither did the authors.

It has improved with v4 and v5, but I'm sure that given enough time I'll spot more problems. I did see that generated code had errors, but when they were pointed out, the LLM fixed the issue.
 
  • #29
@jedishrfu Recently I asked an LLM to give me a reference for how it understood something in a paper, and it gave me a link to some online university notes. The notes existed online once upon a time, but the professor has since taken them down, or the URL has changed.
Maybe in ten years or so, AI will be decently reliable. I blame part of this on the portion of the math community active on social media that has been heavily promoting these LLM technologies. They keep focusing on how well these models have done on this year's IMO problems, or on some higher-math benchmark. I am not sure that being able to solve difficult competition-level math problems is a good benchmark for a well-rounded LLM, in the sense of one that can give accurate cooking advice for certain dishes, derive some physical law purely from observational data, solve some complicated chemistry problem, and then give sound advice to someone facing important, life-changing financial decisions. For a human, all of these things engage different cognitive abilities. I'm not sure any LLM is there yet.
 
  • #30
@jedishrfu I was going to reply in my thread that you just replied to, but I will reply here since I think @TensorCalculus would want to hear this. I am not sure if you two have heard, but someone important and senior at Microsoft said in an interview that there is no need to learn to code anymore because AI will do it all for us. Well, we all know about that recent Tea app hack, which, according to the rumor mill on the internet, involved an app built via "vibe coding". Anyway, I know someone at Microsoft who is high up on the project-management totem pole; I can't say whether it is at the C-suite level. This person gave me similar advice, not quite at the level of what that other person said in public, but similar in spirit and messaging: that I don't need to actually know how to code well anymore, but it is important to know how to read code, since you can always get the AI to write it for you. I was asking this person about C and assembly-language programming, but the advice was not only for assembly or C; it was for coding in any language. This person's technical background is also in AI.

I felt very uncomfortable with the implications of that advice. Why? My understanding is that programming, as a subset of computer science, is a skill set that (if my impression is correct) needs a decent amount of practice initially just to be able to do it, and then more practice to get good at it, a bit like swimming, riding a bike, or playing an instrument. I could be wrong in my analogy, and please feel free to correct me. The thing is, even assuming AI and LLMs get to a point where they don't hallucinate, or the chances of it are extremely low, there is always the possibility that the software the AI builds contains vulnerabilities. I am not sure that just being able to read source code well lets someone spot the vulnerabilities that allow black-hat hacking. I am also assuming that LLMs are trained to write code from what is publicly available, like that freezer-box-for-storing-fish-sounding site, Git-something. Either of you can chime in if you like. I just feel that by relinquishing most of the code-writing task to a machine, we lose something as a result: maybe creativity, speed, or an eye for knowing how to build something safe. And it is more than comparable to kids using calculators to do their math homework instead of learning to do it with pencil and paper, or students learning to do integrals by hand instead of using a CAS.
 
  • #31
Think of it this way: the world is changing. At one time, engineers used pencil and paper, then slide rules, before advancing to computers, compact digital calculators, and desktop machines with the latest CAD software and 3D printing tools.

So now we get the AI to do the programming: describing what kind of application we want, adding the various attributes it should have and the type of GUI it should generate. You are coding with words, and the machine produces more detailed but human-readable code in whatever language we learned in school, so that we can inspect and test the code to make sure it functions correctly.

It's still programming, but we've completed it using some powerful new tools which will free us up to do something new and exciting. We now get to become master testers who find bugs in AI-generated code and then collaborate with an AI expert to fix the AI tool.

In a sense, it's like Spock from Star Trek when he talks to the computer, even though he may already know the answer or how to get it.

Only time will tell how the industry will evolve, who will lose their jobs, and who will gain the skills for a new role. Many issues remain unresolved, including how to manage legacy code and whether we need more than one programming language. So is it FORTRAN, COBOL, Java, or Python? Each has its pros and cons.

One language to rule them all.
 
  • #32
@jedishrfu if we are going to use Star Trek as a reference for a plausible future scenario concerning AI, Star Trek computers are actually not sentient AIs... but I digress. Anyway, in Star Trek people still know how to code, and they do it all the time. They don't just let the computer do all the work for them. Just look at Voyager and Deep Space Nine: in both series, engineers and science officers come up with algorithms and do intricate coding tasks, since they are dealing with station or starship system components. They have to know how to do it well because, if the code has flaws, very bad and fatal things can happen. That is very different from what is happening now, where people can design software systems without any kind of computer science or engineering training. What if somebody gets an LLM to design some innocent web app for, say, a bank, but doesn't know enough programming to detect the vulnerabilities created by what they didn't ask the LLM to include? The people at the bank this person works with approve it, and later the web app's vulnerabilities allow hackers to gain access to customer data. I forgot to mention that the people who worked with the person who vibe-coded the web app take it for granted that the LLM will take care of safety issues, etc. There is a lot of trust based on implicit assumptions.

Star Trek computers only do what you ask them to do, nothing more, nothing less. (I'm discounting Data, Spock, the Bynars, and any of the intelligent thinking androids that appeared in the various series.)
 
  • #33
This discussion reminds me of the role of robots in the industrial world of manufacturing. Very few of us can afford a hand-built car. Did that cost jobs during the last century? Sure, and not a few. But what would have been the alternative?

And if we look more closely at industrial history, we will certainly find even more examples of machines replacing jobs, and I think the same will happen with AI. What they cannot replace, in my opinion, is the understanding. AIs are stupid in a way: they gather - and, as @jedishrfu pointed out in the example about Julia, even invent - evidence. So are the robots in the auto industry; they do not know why they are welding parts.
 
  • #34
@elias001 you are missing my point. It's in the way Spock interfaced with the computer: he talked to it.

It's not unlike ChatGPT, where you write to it in a conversational way. This is exactly where we are headed: if properly trained, you'll be able to use your native language to talk to the computer.

Technology continues to evolve. The iPhone is a great example of multiple devices becoming one device. Programming languages follow a similar arc, with natural language now near or at the top.

Engineers build better tools to help build better tools. Jobs will change but humanity will adapt or die out trying.
 
  • #35
@jedishrfu but if we rely on these LLMs too much, will fewer people have the motivation to learn to code?
 
  • #36
elias001 said:
@jedishrfu I was going to replied in my thread you just replied to, but I will replied here since I think @TensorCalculus would want to hear this. I am not sure if you two have heard from Microsoft, someone important and senior from there in an interview said that there is no need to learn to code anymore because AI will do it all for us. Well, we all know of that recent tea app hack, which was built via "vibe coding", according to the rumor mill on the internet. Anyways, I know someone from Microsoft who is high up on the project management totem pole. I can't say if it is at the C-suite level. This person gave me similar advice not quite at the level of what that other person said in public, but similar in spirt and messaging and thar is i don't need to actually know how to code well anymore, but it is important to know how to read it since you can always get the AI to do it for you. I was asking this person about C and assembly language programming. This person's advice was not only for assembly or C, but for coding in whichever language. Oh this person's technical background is in AI also.

I felt very uncomfortable with the implication of that person's advice. Why? My understanding is that programming itself as a subset of compute science is a skill set that, if my impression is correct needs some decent amount of practice initially to be able to do it and then some more practice after to get good at it. Kind of like swimming, riding a bike, playing an instrument, etc. I could be wrong in my analogy and please feel free to correct me. The thing is, assuming the AI and LLMs get to a point where they don't hallucinate or their chances of doing it are extremely low. There is always the possibility of one asking the AI to build some software where the way was built containd vulnerability. I am not sure just because someone being able to read various source code well can spot those vulnerabilities that allows for black hat hacking. I am also making the assumption that LLMs are trained on code writing from what is publicily made available like that freezer box for storing fish sounding name Git-Something. Either one of you could chime in if you like. I just know that relinquishing many of the code writing task to a machine, we loses something as a result. Maybe is creativity, speed, or an eye for knowing how to build something safe. Also, it is more than compare to the case of kids using calculator to do their math homework instead of learning it to do it onmmusing pencil and paper or students learning to get good at doing integrals by hand instead of using a CAS system.
It's scary, yes, but all it will mean is that learning to code in the way we currently know it will become redundant.
For now, AI isn't perfect at all. Even "vibecoding" requires some level of coding knowledge: to know what to prompt the AI for, etc. And with AI, people will focus on other things: as said before, testing, or developing the AIs themselves, or working in prompt science... I could name more. My dad is a developer and is in fact encouraged to use AI: but if the AI could do everything on its own, why would they employ him? There still needs to be a human debugging, testing, and navigating the project, especially when the project is a bigger one.
The problem right now is for those who were taught coding in a way that didn't anticipate AI: they're losing jobs because they've learnt skills that AI can now do. Losing factory jobs to automation hurts, but it also opens up new ones.
elias001 said:
@jedishrfu but if we relied on these LLM too much, will less people have the motivation to learn to code?
Probably. But the skill of knowing how to write basic code has also become correspondingly less valuable as a result of LLMs.
 
  • #37
@TensorCalculus Hackers from all three sides still have to know at least basic coding skills, as do people working with embedded systems, right? But does AI write safer code than a human? I am assuming it is not 100% safe. So do hackers of the future just need to know how a specific AI works in terms of its code base and find vulnerabilities that way? Also, what is prompt science? Please, I don't want to hear about spending four years in university studying how to ask AI questions. I mean, imagine there being a degree just for that.
 
  • #38
Think of AI as software. You have to deal with software and how to use it in your studies. It may take years to become really good at using a specific piece of sophisticated software. Same with AI (for now).
 
  • #39
elias001 said:
@TensorCalculus Hackers from all three sides still has to know very basic coding skills
True. Unethical hackers will have problems trying to get most LLMs to write malicious code too. But they will certainly be assisted by AI.
elias001 said:
But do AI write safer code than a human? I am assuming it is not 100% safe. So hackers of the future just need to know how a specific AI works in terms of its code base and finding vulnerabilities that way?
Of course there will always be vulnerabilities in code. And you're right: we will still need ethical hackers and testers to make sure these vulnerabilities are minimal. Though much of hacking isn't even about coding; it's about exploiting human weaknesses too. The most common type of cyberattack is phishing (source), which relies on tricking people into giving away information rather than exploiting vulnerabilities in code.
As to whether AI writes "safer" code, I can imagine it varies on a case-to-case basis.
elias001 said:
Also what is priority science? Please I don't want to hear about spending foutlrvyears in university studying about how to ask AI questions. I mean imagine there is a degree just for that.
Prompt science is the study of how prompts change the outputs of an AI and which prompts are better or worse, etc. If I shove a whole ton of unnecessary info into my prompt, then I will of course get a lower-quality output from the LLM than if I had written a concise prompt with exactly the information the AI needed.
 
  • #40
@TensorCalculus hackers don't use aligned AI. Look up WhiteRabbitNeo, WormGPT, or HackerGPT, or ask Grok or Gemini for a list of abliterated/uncensored LLMs. WhiteRabbitNeo and HackerGPT try to be humorous in their interactions. Do you see any of these LLMs as your companions? I don't know why people call them that.
 
  • #41
elias001 said:
@TensorCalculus hackers don't use align AI. Look up white rabbit neo, wormGPT, hackerGPT or ask grok or gemini about a list of abliterated/uncensored LLMs. WhiteRabbitNeo and HackerGpt try to be humorous with their interactions. Do you see any of the LLM as your companions? I don't know why people call them that.
I know uncensored ones exist out there, hence the "most" :woot:
No, I don't. I'm not a huge LLM fan, though I can admit to using them from time to time. I prefer coding by myself simply because I find it fun.
 
  • #42
@TensorCalculus your profile states that you are from England. Is everything OK there? I heard there were massive political protests. By the way, a lot of the dystopian films I have seen came from the UK.
 
  • #43
elias001 said:
@TensorCalculus your profile state that you are from England. Is everything ok there? i heard there were massive political protests there. By the way, a lot of dystopian films I have seen all came from UK.
Bit off topic...
We're fine here. I live in a small town near one of the most well-off cities in England; it's perfectly safe. When did you hear that? There were protests a while ago about immigration, but that was a short-lived, one-off thing...
If you want to continue this conversation though, since it's not too related to the thread, can I suggest you move to DMs?
 
  • #44
Well, a major tech company that someone I know works at [purposely being vague] just made a few testing/checking-based roles redundant because of AI. No one is going to work that job anymore across the whole company.
It's getting worse by the day...
 
  • #45
  • Like
Likes TensorCalculus
  • #46
I've seen the second one but not the first, thanks!
I don't think anyone can predict when, or if, we will ever get an AI bubble though...
 
  • #47
@TensorCalculus I have a feeling Altman knows something is up and is trying to give everyone a hint ahead of time. By the way, two of the original AI researchers at OpenAI who worked with Altman (I think they were even still at OpenAI when Altman was ousted, Microsoft agreed to fund them, and everyone at OpenAI threatened to quit) went to my university. I knew one of them as he was finishing his math Ph.D., and the other I spoke to shortly before he went off to work with Hinton. Some people pivot to machine learning after their math Ph.D. You have to find a way to pay the bills somehow.
 
  • Wow
Likes TensorCalculus
  • #48
@TensorCalculus I just saw this on my feed: it sounds like it is economically safer to be a mechanical/electrical/materials engineer. I mean, a lot of blue-collar jobs have to use technology and keep up with new technological developments, and engineers play a big role in that.
 
  • #49
@jedishrfu I think there is something you have not brought up in your original post, and people like myself are wondering about it. Before ChatGPT arrived on the scene, there were deep learning, big data, and artificial neural networks. My understanding, to the best of my memory, is that these AI-related concepts, which eventually got their own courses in various computer science departments, could only be learned through a university CS department. In the meantime, software engineering became a distinct specialization within computer science, but the so-called "data scientist/engineer" was never formally recognized as something as distinct as, say, a software engineer. Nowadays AI/machine learning/data science is coming into its own: one can learn it, study it, and treat it as a separate career path not only through a CS department but also through the math, stats, or engineering departments. So for software engineers, is it hard to do additional training in data science and AI by taking such courses at a typical university? I am asking because AI, data science, machine learning, and artificial neural networks all have software engineering and various parts of computer science as their foundations. I am basically trying to say that it might not be all doom and gloom.
 
  • #50
Hello, sorry for my late reply! My silly brain thought I'd replied to this, but I hadn't :cry:
elias001 said:
@TensorCalculus I have a feeling Altman knows something is up and he is trying to give everyone a hint ahead of time. By the way, two of the original AI researchers at Open AI that worked with Altman, i think they were even still at open AI when Altman was ousted after Microsoft agree to funded them and everyone at Open AI all threaten to quit. Two of those AI researchers went to my university and I knew one of them as he was finishing his math P.h.D and the other one i spoke to slightly before he went off and worked with Hinton. Some people after their math ph.d pivoted to machine learning. Have to find ways to pay the bills somehow.
I mean, you can be an AI researcher, and there are many of them out there, but you still can't predict the future... though that does make me trust the video more.
Very cool that you spoke to them!
elias001 said:
@TensorCalculus I just saw this: on my feed. It sounds like is economically safer to be a mechanical/electrical/materials engineer. I mean a lot of blue collar jobs have to use technology and keep up with new technology development. Engineers play a big role in that.

Interesting! I think we will have to wait and see how the value of the skills that CS majors are taught shifts. I am sure the curriculum will try its best to adapt to the changes. Why do you say mechanical/electrical/materials engineer specifically?
 
