A Crisis for Newly Minted CompSci Majors -- entry level jobs gone

  • Thread starter: jedishrfu

Summary:
Fresh computer science graduates are facing a significant employment crisis, with unemployment rates between 6.1% and 7.5%, far exceeding those of other majors. The rise of AI has led to the automation of many entry-level coding jobs, while high-paying positions in machine learning are increasingly reserved for more experienced candidates. The perceived value of computer science degrees is declining, as some successful individuals are finding lucrative opportunities without formal degrees. Experts suggest that pursuing degrees in physical sciences may be more advantageous in the current job market. The demand for software testing is expected to grow due to the complexities introduced by AI, highlighting the need for adaptability in the tech industry.
https://techcrunch.com/2025/08/10/the-computer-science-dream-has-become-a-nightmare

The computer science dream has become a nightmare:

The coding-equals-prosperity promise has officially collapsed.

Fresh computer science graduates are facing unemployment rates of 6.1% to 7.5% — more than double what biology and art history majors are experiencing, according to a recent Federal Reserve Bank of New York study. A crushing New York Times piece highlights what’s happening on the ground.
 
Both my parents are software engineers and I have many friends who aspire to be software engineers: so I hear about this a lot. It is a bit of a nightmare...
Even my parents use AI all the time nowadays: in fact AI use is sort of mandated by their companies.
AI has been taking all of the entry-level jobs: but it's also been making new jobs. Jobs in ML can be super high paying and are becoming more and more abundant as AI becomes more widespread. So... maybe all is not lost. (But those jobs will likely go to those more experienced with coding and not people fresh out of Uni now I think about it)
The value of CS degrees is also dropping. I know someone who is genuinely brilliant, who dropped out of Imperial College and is now building a business. As a 19-year-old with no degree, he also landed several job offers, including really high-paying ones that even CS graduates struggle to get.

On the bright side, people say that physics/maths degrees are the ones to pursue rather than CS in this age: NVIDIA's CEO Jensen Huang said that if he had to pick a degree today, it would be the physical sciences and not the software sciences.
 
Yes, it's sad that this has happened. Looking over my 45 years as a software engineer and having played with AI, I can understand why companies are doing this.

In the 1970s, there were keypunch operators responsible for handling the workload of inputting data into the computer and transcribing what programmers wrote on coding sheets. Then the TTY was introduced, and they experienced a reduction in their workload when programmers began using terminals and modems. As technology advanced, data was transferred electronically, significantly decreasing their data input load.

Also in the 1970s, nearly every manager had a secretary, and then a few years later, secretaries began handling multiple managers as the need to type letters and documentation decreased.

Some executive bosses would act as lazy editors, requiring the secretary to keep revising their letters. Secretaries were not happy campers when that happened.

Then the Wang word processor came out, and secretaries were elated because they didn't have to retype a letter to fix a mistake. It looked bad if there was even one mistake in correspondence with a client.

But the Wang revolution was beginning a new wave of change.

The PC revolution came and allowed managers to type their own letters, and secretarial work was reduced to almost nothing.

The remaining few secretaries handled various employee requests and handed out pay stubs until EFT became the norm and employees got PCs for work.

At IBM, some secretaries went to the chip production line for better pay at the risk of exposure to the toxic chemicals.

Then it was the progression from mainframes to minicomputers to PCs that reduced the need for tape jockeys, the guys who kept operations on the mainframe running smoothly.

There were also support personnel who printed specialized stock certificates and other important documents on offline MDS printers, and still others who bundled card decks with their output and delivered them to the GE main plant.

Technology constantly threatens your job, and you must be adaptable to change.

I have a physics degree that enabled me to work on CS, math, or physics projects in my last employment. I also transitioned my programming language proficiency from Fortran/COBOL/Assembler work to C/C++/Assembler, and then to Java/Python work with Docker included.

So take care, save your pennies in whatever matching 401K retirement plans are offered to have a better retirement once you get off the eternal Hamster Wheel.

The Twilight Zone, though dated, had a marvelous episode, "The Brain Center at Whipple's," where a machine replaced everyone. The only jobs left were the machine repair roles and the factory head.

The end of the episode showed Robby the Robot swinging Mr. Whipple's watch around. Meanwhile, Mr. Whipple was in the local bar saying, "I gave my life to the company and now look what they did."

His former employees had little sympathy for his situation, since he hadn't cared about them.

---

TESTING RULES!

The need for folks to test code will increase, since AI hallucinations may be around for quite a while. Companies want to avoid legal entanglements like the one Air Canada experienced when its chatbot gave a customer false advice about bereavement fares.

This can also happen due to changing models and modifications to the system prompts that set up the models, so testers are needed to test, test, and retest.

Maybe even some lawyers can get involved, since they've been trained to study contracts and look for loopholes and unfavorable terms. They would make good testers, and it's a safer gig than ending up as dino meat à la Jurassic Park.
 
jedishrfu said:
The only jobs left were the machine repair roles and the factory head.
jedishrfu said:
TESTING RULES!
Yes indeed.

In the latter part of my career (network engineer, retired 2.5 years now) much of my value to the organization was in troubleshooting.

As things have become more complex over the decades, the process of troubleshooting has become deeper and more difficult. There are oh so many ways for things (including diagnostic tools) to go wrong.
 
A significant part of my career was in testing, first as a mainframe programmer writing scripts to test new software releases on our timesharing service and later on the batch service.

Then there were VLSI testers using PC controllers, for which we wrote a custom language and compiler so that the QA team could certify IBM’s newest chips via testing.

Some years later, I worked as a test lead for a major IBM software project. We wrote demo code to test new features and showcase how those features can be used.

But the most recent thing I’ve seen is a new wave where one AI tests another. There was a paper describing how an older LLM of a lineage taught a new LLM and passed on its bad behavior via coded numbers that researchers didn’t understand.

A program can acquire a self-healing module that corrects errors on the fly, i.e., directed self-modifying code, reducing the need for application testing.

In the future, testers may need to become subject matter experts to spot AI hallucinations.

But again, AI can replace that job, and we’re back to the days of Mr Whipple.

But testing is still the best way to go.
 
jedishrfu said:
A major part of my career was in testing: first as a mainframe programmer writing scripts to test new software releases on our timesharing service, and later on the batch service.

Then there were VLSI testers using PC controllers, for which we wrote a custom language and compiler so that the QA team could certify IBM’s newest chips via testing.

Some years later, I worked as test lead for a major IBM software project. We wrote demo code to test new features and showcase how those features can be used.

But the most recent thing I’ve seen is a new wave where one AI tests another. There was a paper describing how an older LLM of a lineage taught a newt LLM and passed on its bad behavior via coded numbers that researchers didn’t understand.
Hmm: I never thought about the need for testing.
Would another example of AIs testing/training each other be DeepSeek? It used some of ChatGPT's output to train itself, which is why at first it identified itself as ChatGPT (whether it was entirely trained on ChatGPT output is a matter of debate): distillation, I think it's called. The training method used on DeepSeek meant it was able to get amazing results at a fraction of the cost of its competitors.
jedishrfu said:
A program can acquire a self-healing module that corrects errors on the fly, i.e., directed self-modifying code, reducing the need for application testing.

In the future, testers may need to become subject matter experts in order to spot AI hallucinations.

But again, that job can be replaced by AI, and we’re back to the days of Mr Whipple.

But I still think testing is the best way to go.
Your argument seems pretty reasonable: you've convinced me that testing is the way...
 
jedishrfu said:
A major part of my career was in testing....
...
But I still think testing is the best way to go.

TensorCalculus said:
Hmm: I never thought about the need for testing.

When I was a PhD student in physics, some of my classmates were pursuing master's degrees in computer science on the side. (I took a C++ course with them and was contemplating switching to a programming career.)
Microsoft came to do interviews. While "programmer" or "developer" seemed to be the preferred dream job, "tester" seemed to be the most common job offered to those who made it through the rounds of interviews.
 
Yes, IBM routinely hired summer interns to do product testing to answer questions like:

- does it install on Windows, macOS, and Linux
- does it uninstall and leave no trace
- does it check for prerequisites like memory, CPU, and disk space
- does internationalized code work
- does it work with other products
- how much disk/memory does it use
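Checklist items like the prerequisite check lend themselves to automation. Here's a minimal sketch of what a scripted pre-install prerequisite probe might look like; the disk threshold and supported-platform set are invented for the example, not from any real IBM product.

```python
# A minimal sketch (illustrative, not a real installer) of automating the
# prerequisite item from the intern checklist: check the platform and free
# disk space before attempting an install.
import shutil
import sys

MIN_DISK_BYTES = 500 * 1024 * 1024          # hypothetical: 500 MB required
SUPPORTED = {"linux", "darwin", "win32"}    # Linux, macOS, Windows

def check_prereqs(install_path="."):
    """Return a list of human-readable problems; empty means good to go."""
    problems = []
    if sys.platform not in SUPPORTED:
        problems.append(f"unsupported platform: {sys.platform}")
    free = shutil.disk_usage(install_path).free
    if free < MIN_DISK_BYTES:
        problems.append(f"need {MIN_DISK_BYTES} bytes free, found {free}")
    return problems

if __name__ == "__main__":
    issues = check_prereqs()
    print("prereqs OK" if not issues else "\n".join(issues))
```

A real installer would also probe memory, CPU, and co-installed products, but the shape is the same: each checklist question becomes a scripted check.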
 
jedishrfu said:
TESTING RULES!

The need for folks to test code will increase, since AI hallucinations may be around for quite a while.
Agreed, software testing is a very important part of software development, and the company I worked at for many years always had a "System Test" group that did QA testing on all new software releases prior to them being sent to customers. As a hardware engineer who designed a lot of the new hardware products, I worked really closely with the software developers and System Test to make sure that the product worked well and we did not ship buggy products to customers.

On many occasions we would not know initially if a problem was due to some hardware issue (me) or some software/firmware issue, so I have spent a lot of time in the lab and in meetings with SW/QA to sort things out. I've even built special observation fixtures specifically so that QA can check out ideas about what may be causing a problem. One of them traced the execution path of a uC to see what memory locations were touched when performing different tasks (logic analyzer observability was not possible in this case), and it helped them to narrow down the part of the code that was having a problem.

At one point in this company's history, they cut way back (to almost zero) the System Test group, and tried to outsource more of the software development. That bit them in the butt as many more bugs started showing up in the field, and costing them extra money in our Customer Support group and resulting in lost sales. Very soon after that there was a company-wide initiative to bring back the System Test group and make sure that each software release had adequate testing before being sent to customers.

And as a recent example of AI probably messing up customer software experiences... I needed to schedule an appointment to get my wife's Prius windshield replaced (too much time spent in sandy areas near the beach had pitted it so much that sun glare would cause very reduced visibility). I made the appointment with the largest auto glass replacement company in the US (I won't mention their name), and I made it for 10AM in Santa Cruz to minimize the hassle of having to drive through morning commute traffic. Unfortunately, the many response e-mails and texts that I received confirming my appointment and reminding me of the appointment had a mix of several different times: 10AM PST, 10AM, 11AM. Since we are now in Pacific Daylight Time (PDT), that was especially worrisome since 10AM PST = 11AM PDT. I wanted to call the repair shop to confirm that the appointment was for 10AM PDT so I would not have to wait an extra hour when I arrived, but unfortunately the automated replies did not list an actual human phone contact to call. Sigh.

Luckily my 10AM appointment went off on time, and all was good. I did mention the software bug in my Google review of the appointment, but who knows if some System Test person at <unnamed company> actually read my Google review... :wink:
 
  • #10
jedishrfu said:
Yes IBM routinely hired summer interns to do product test to answer questions:

I said to a Microsoft recruiter that I thought being a developer was more interesting than being a tester because developers get to be creative.
He said that creativity is needed for testers as well.
They have to find creative ways to test the software...
can the tester somehow break the software by doing some legal or illegal operations that the developer didn't anticipate?
So, that changed the way I saw testers.
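That "break it with operations the developer didn't anticipate" mindset is essentially what fuzz testing automates today. Here is a small illustrative sketch; `parse_quantity` is a made-up stand-in for real product code, not anything from the thread.

```python
# Illustrative sketch of the tester's mindset as a tiny fuzzer: feed a routine
# a mix of legal and hostile inputs and record anything that fails in an
# unexpected way. parse_quantity is a hypothetical function under test.
import random
import string

def parse_quantity(text):
    """Hypothetical function under test: parse a quantity field."""
    return int(text.strip())

def fuzz(fn, trials=1000):
    failures = []
    for _ in range(trials):
        candidate = random.choice([
            str(random.randint(-10**9, 10**9)),          # plausible input
            "".join(random.choices(string.printable,     # hostile junk
                                   k=random.randint(0, 20))),
            "", "  42  ", "+7", "0x1F", "9" * 400,       # edge cases
        ])
        try:
            fn(candidate)
        except ValueError:
            pass  # a clean, documented rejection is fine
        except Exception as exc:
            failures.append((candidate, repr(exc)))  # defect-report material
    return failures

print(len(fuzz(parse_quantity)))  # count of unexpected failures found
```

A creative tester's value is in choosing the hostile inputs; the loop itself is the easy part.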
 
  • #11
Yes, we had a couple of gifted testers. One coworker was tasked with testing internationalization code before the translators got involved. Basically, that meant all displayed text should appear in the selected language, with no English text anywhere.

All messages were stored in properties files. She created her own language, the Martian locale, where she prefixed each English message with an "X" and then launched the application, running a battery of test cases. Sure enough, there were a few error messages missing the X prefix that hadn't been properly internationalized.
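The Martian-locale trick is what's now commonly called pseudolocalization, and its core fits in a few lines. A hedged sketch; the catalog keys and messages below are invented for illustration:

```python
# Sketch of the "Martian locale" trick (today called pseudolocalization).
# Every catalog message gets an "X" prefix; any displayed text without the
# prefix must be hard-coded English that bypassed the message catalog.
def make_martian(catalog):
    """Build the pseudo-locale from an English message catalog."""
    return {key: "X" + msg for key, msg in catalog.items()}

def find_unlocalized(displayed_lines):
    """Lines lacking the X prefix were never internationalized."""
    return [line for line in displayed_lines if not line.startswith("X")]

# Invented example catalog and program output:
catalog = {"greeting": "Hello", "err.disk": "Disk full"}
martian = make_martian(catalog)
output = [martian["greeting"], "Error: out of memory"]  # 2nd line hard-coded
print(find_unlocalized(output))  # -> ['Error: out of memory']
```

Modern pseudo-locales usually mangle the whole string (accents, padding) rather than just prefixing, but the detection idea is the same.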

Another coworker, while testing a demo application, found it failed to properly connect to the network using the product's API. He kept tugging at this defect for weeks, filing defect reports that got rejected by the development team. But he persisted, and finally the developers reluctantly and sheepishly admitted that there was a serious design flaw.

---

In my case, as test lead, I took an interest in the temporary install code. Other folks had developed scripts to download a build and then tweaked their systems, adding parameters to the environment. Builds also depended on specific prerequisite products at specific versions.

I wrote a fancy awk script that showed all available builds, downloaded and unzipped them ready for test or development, and verified that the necessary changes were made. Our developer team would routinely send out group letters saying new parameters had been added or others needed to be tweaked in the environment. They expected us to remember every letter and set up our test machines accordingly.

Being the team lead, I wanted to avoid false defect reports caused by a missing parameter. This script gave my team a consistent test-machine environment. Developers liked the script too, because it verified that the prerequisites were installed and the environment was set up correctly.
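The verification half of a script like that can be sketched in a few lines. This is not the original awk script; the parameter names below are invented stand-ins for whatever the project actually required.

```python
# Sketch of the environment-verification step of a build-setup script: fail
# fast when a required test-machine parameter is missing, instead of letting
# a bad setup produce false defect reports. Variable names are hypothetical.
import os

REQUIRED_VARS = ["TEST_DB_HOST", "TEST_LICENSE_KEY"]  # invented examples

def verify_environment(env=None):
    """Return the required variables missing from the environment."""
    env = os.environ if env is None else env
    return [v for v in REQUIRED_VARS if v not in env]

missing = verify_environment({"TEST_DB_HOST": "db1"})
print(missing)  # -> ['TEST_LICENSE_KEY']
```

The payoff is exactly the one described above: every test machine either passes the same checks or reports precisely which parameter is missing.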

However, one person hated my script: our project lead, who had taken it upon himself to write the InstallShield code and found that the team preferred my script. He disliked it because my checks, which identified what parameters needed to be set up, along with other things like disk space and memory, were not part of his InstallShield code. I became the target of his wrath.

---

The one thing about being a tester was that the developers looked down on you, and you became the scapegoat for every scheduling delay. They disliked the test team finding bugs that they then had to investigate and fix, or reject the report. The test team was blamed when schedules weren't met because we found too many defects, and blamed again when a defect slipped through our net and was found by a customer.

This may change as developer roles will be diminished and tester roles will flourish until...

...the AI catches up with them too.

For now TESTERS RULE!
 
  • #12
I just realised that my mom is a Wi-Fi tester and has been for the past 10 years, and I hadn't known until I asked her opinion on testers.
Whoops.
jedishrfu said:
Yes, IBM routinely hired summer interns to do product testing to answer questions like:

- does it install on Windows, macOS, and Linux
- does it uninstall and leave no trace
- does it check for prerequisites like memory, CPU, and disk space
- does internationalized code work
- does it work with other products
- how much disk/memory does it use
I feel sorry for those summer interns. :(
Don't get me wrong, it's nice to get an internship, but that sounds like a pretty mundane one. Maybe there's creativity required in testing... but not in this type...
berkeman said:
Agreed, software testing is a very important part of software development, and the company I worked at for many years always had a "System Test" group that did QA testing on all new software releases prior to them being sent to customers. ...

At one point in this company's history, they cut way back (to almost zero) the System Test group, and tried to outsource more of the software development. That bit them in the butt as many more bugs started showing up in the field, costing them extra money in our Customer Support group and resulting in lost sales. Very soon after, there was a company-wide initiative to bring back the System Test group and make sure that each software release had adequate testing before being sent to customers.
Why would they get rid of it in the first place? It makes no sense to me.
 
  • #13
TensorCalculus said:
Why would they get rid of it in the first place? It makes no sense to me.
That's the sort of thing that happens on IT projects!
 
  • #15
TensorCalculus said:
Why would they get rid of it in the first place? It makes no sense to me.
To try to save money on engineering headcount and other related expenses. Management's idea was to outsource more of the code development to places like India and Hungary, and count on those companies to do their own System Testing. But their idea of testing was not up to the standards we had here (see jedi's comments above), and many more bugs started making it through to customers. Not a good situation.
 
  • #16
berkeman said:
To try to save money on engineering headcount and other related expenses. Management's idea was to outsource more of the code development to places like India and Hungary, and count on those companies to do their own System Testing. But their idea of testing was not up to the standards we had here (see jedi's comments above), and many more bugs started making it through to customers. Not a good situation.
Oh, that makes a bit more sense now: I can see why they would have thought to do that.
I don't know about Hungarians, but there are definitely a lot of talented Indian coders: there have been multiple instances of companies claiming to be some sort of AI or new automated technology... but then they ended up just being anonymous Indians...
 
  • #17
TensorCalculus said:
multiple instances of companies claiming to be some sort of AI or new automated technology... but then they ended up just being anonymous Indians
So, it seems "Anonymous Indians" is a "sort of AI".
 
  • #18
I once heard that a defect we found cost the company $20 to fix.

If found in a beta, it was $200 to fix.

If found by a customer (i.e., many customers), a patch had to be coded, tested, and deployed, for about $2,000.

So that was the incentive to test well.
 
  • #19
Concerning cutbacks, I once worked on a significant project with a satellite developer team from California. After the project, they were given awards for their excellence and then, sadly, were disbanded and let go.

The company gave away some of its best developers, who then went to the company’s competitors.
 
  • #20
jedishrfu said:
Wrt cutbacks, I once worked on a major project with a satellite developer team from California. At the completion of the project, they were given awards for their excellence and, sadly, disbanded and let go.

The company gave away some of its best developers, who then went to the company’s competitors.
The company's own fault... disbanding them.
 
  • #21
jedishrfu said:
an older LLM of the lineage taught a newt LLM and passed its bad behavior via coded numbers that researchers didn’t understand.
I didn't think a newt was smart enough to learn an LLM, let alone understand coded numbers ...

Bad, bad newt!
 
  • #23
jedishrfu said:
I once heard that a defect we found cost the company $20 to fix.

If found in a beta, it was $200 to fix.

If found by a customer (i.e., many customers), a patch had to be coded, tested, and deployed, for about $2,000.

So that was the incentive to test well.
It's like the Powers of Ten for not-testing-well. :wink:
 
  • #24
jedishrfu said:
I once heard that a defect we found cost the company $20 to fix.

If found in a beta, it was $200 to fix.

If found by a customer (i.e., many customers), a patch had to be coded, tested, and deployed, for about $2,000.

So that was the incentive to test well.
It all depends who's paying. If you are clever enough you get the customer to pay for the bug fixes!
 
  • #25
PeroK said:
It all depends who's paying. If you are clever enough you get the customer to pay for the bug fixes!
I was referring to how my company estimated it internally. Of course the customer ultimately pays; otherwise you won’t be in business for long.
 
  • #26
PeroK said:
It all depends who's paying. If you are clever enough you get the customer to pay for the bug fixes!
Is that the norm for government digitization contracts, since
1. the govt has deep pockets, and
2. the actual end customer will beef at the gov't, not the coding company?

I can understand going over budget due to complexity, but not the sub-standard product.
One more famous one was the digitization of public-service employee payroll, Phoenix Pay, which resulted in underpay, overpay, or no pay for a segment of employees.
It proved unfixable, so the gov't of Canada is now moving over to Ceridian's Dayforce.
 
  • #27
@jedishrfu can you give concrete examples of how an AI hallucinates, or what that means in practice?

I have only used LLMs for understanding steps in math texts, be it textbooks or journal articles. I have also used an LLM to write me a script for text processing of LaTeX code for posting on here. For the math tasks, I always asked the AI to give relevant references. If it can't do a computation, it will say something like "such-and-such expression's calculation seems complex," and then claim that after finishing the calculation I will get the conclusion I am looking for.

I would check two or three LLMs with the same queries to compare the results. In all cases, I ask the LLM to provide references in the form of books, online notes, or scholarly articles.

I am not an expert in the mathematical theory of LLMs, nor in how such learning models, coupled with other ML algorithms, can be considered equivalent to making valid inferences. I can only try to make sure I can trace where the AI looks online for the sources from which it derives its conclusions. That way, if it did make mistakes, I at least know what not to trust.
 
  • #28
A few months ago, I was researching Julia, a fairly new programming language from MIT.

ChatGPT gave me a nice summary of Julia along with some URLs to tutorial sites. None of them worked; they were all fake.

Later I asked for citations on a topic, and half of them didn’t exist. Books that were mentioned didn’t exist, and neither did the authors.

It has improved with v4 and v5, but I'm sure that given enough time I’ll spot more. I did see that generated code had errors, but when they were pointed out, the LLM fixed the issues.
 
  • #29
@jedishrfu Recently I asked an LLM to give me a reference for how it understood something in a paper, and it gave me a link to some online university notes. The notes existed online once upon a time, but that professor took them down, or the URL has changed.
Maybe in ten years or so AI will be decently reliable. I blame this partly on the part of the math community active on social media that has been promoting these LLM technologies. They keep focusing on how well these models have done in solving this year's IMO problems, or how well one did on some higher-math benchmark. I am not sure that solving difficult competition-level math problems is a good benchmark for a well-rounded LLM: one that can give accurate cooking advice, derive a physical law purely from observational data, solve a complicated chemistry problem, and then give sound advice to someone making important, life-changing financial decisions. All of these things engage different parts of a human's cognitive abilities. I'm not sure any LLM is there yet.
 
  • #30
@jedishrfu I was going to reply in the thread you just replied to, but I will reply here since I think @TensorCalculus would want to hear this. I am not sure if you two have heard, but someone important and senior at Microsoft said in an interview that there is no need to learn to code anymore because AI will do it all for us. Well, we all know of that recent Tea app hack, which, according to the rumor mill on the internet, was built via "vibe coding." Anyway, I know someone at Microsoft who is high up on the project-management totem pole; I can't say whether it is at the C-suite level. This person gave me similar advice, not quite at the level of what that other person said in public, but similar in spirit and messaging: I don't need to actually know how to code well anymore, but it is important to know how to read code, since you can always get the AI to write it for you. I was asking this person about C and assembly-language programming, but the advice was not only for assembly or C; it was for coding in any language. This person's technical background is in AI, too.

I felt very uncomfortable with the implications of that advice. Why? My understanding is that programming, as a subset of computer science, is a skill that, if my impression is correct, needs a decent amount of practice initially to be able to do at all, and then more practice to get good at. Kind of like swimming, riding a bike, or playing an instrument. I could be wrong in my analogy, so please feel free to correct me. The thing is, even assuming AI and LLMs get to a point where they don't hallucinate, or their chances of doing so are extremely low, there is always the possibility that software the AI builds contains vulnerabilities. I am not sure that merely being able to read source code well lets someone spot the vulnerabilities that enable black-hat hacking. I am also assuming that LLMs are trained on code that is publicly available, like on that freezer-box-for-storing-fish-sounding site, Git-Something. Either of you could chime in if you like. I just feel that by relinquishing most code-writing tasks to a machine, we lose something: maybe creativity, speed, or an eye for knowing how to build something safe. It is more than comparable to kids using calculators to do their math homework instead of learning to do it with pencil and paper, or students learning to do integrals by hand instead of using a CAS.
 
