A Crisis for Newly Minted CompSci Majors -- entry-level jobs gone

  • Thread starter: jedishrfu
Summary:
Fresh computer science graduates are facing a significant employment crisis, with unemployment rates between 6.1% and 7.5%, far exceeding those of other majors. The rise of AI has led to the automation of many entry-level coding jobs, while high-paying positions in machine learning are increasingly reserved for more experienced candidates. The perceived value of computer science degrees is declining, as some successful individuals are finding lucrative opportunities without formal degrees. Experts suggest that pursuing degrees in physical sciences may be more advantageous in the current job market. The demand for software testing is expected to grow due to the complexities introduced by AI, highlighting the need for adaptability in the tech industry.
  • #31
Think of it this way: the world is changing. At one time, engineers used pencil and paper, then slide rules, before advancing to computers, compact digital calculators, and desktop machines with the latest CAD software and 3D-printing tools.

So now we get the AI to do the programming: we describe what kind of application we want, add in the various attributes it should have, and specify the type of GUI it should generate. You are coding with words, and the machine produces more detailed but human-readable code in whatever language you learned in school, so that you can inspect and test the code to make sure it functions correctly.

It's still programming, but we complete it using powerful new tools that free us up to do something new and exciting. We get to become master testers who find bugs in AI-generated code and then collaborate with an AI expert to fix the AI tool.
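
To make that "master tester" idea concrete, here is a minimal sketch; the helper function and its behavior are hypothetical, not from any real tool. The AI emits the helper, and the human's value is the edge cases the prompt never mentioned:

```python
# Hypothetical AI-generated helper: parses a price string into a float.
def parse_price(text: str) -> float:
    return float(text.strip().lstrip("$").replace(",", ""))

# The human tester's contribution: edge cases the prompt never mentioned.
def test_parse_price():
    assert parse_price("$1,234.50") == 1234.50
    assert parse_price("  99 ") == 99.0
    assert parse_price("-3.00") == -3.0  # should a negative price even parse?

test_parse_price()
```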

In a sense, it's like Spock from Star Trek when he talks to the computer, even though he may already know the answer or how to get it.

Only time will tell how the industry will evolve, who will lose their jobs, and who will gain the skills for a new role. Many issues remain unresolved, including how to manage legacy code and whether we need more than one programming language. So is it FORTRAN, COBOL, Java, or Python? Each has its pros and cons.

One language to rule them all.
 
  • #32
@jedishrfu if we are going to use Star Trek as a reference for a plausible future AI scenario, Star Trek computers are not actually sentient AIs... but I digress. Anyway, in Star Trek, people still know how to code, and they do it all the time. They don't just let the computer do all the work for them. Just look at Voyager and Deep Space 9: in both series, engineers and science officers come up with algorithms and do intricate coding tasks, since they are dealing with station or starship system components. They have to know how to do it well, because if the code has flaws, very bad and fatal things can happen. That is very different from what is happening now. We can have people designing software systems without any kind of computer science or engineering training. What if somebody gets an LLM to design some innocent web app that is part of, say, a bank, but doesn't know enough programming to spot the vulnerabilities created by what he did not ask the LLM to include? The people at the bank this person works with approve the app. Later on, the web app's vulnerabilities allow hackers to gain access to customer data. I forgot to mention that the people who worked with the person who vibe-coded this web app take it for granted that the LLM will take care of safety issues, etc. There is a lot of trust resting on implicit assumptions.

Star Trek computers only do what you ask them to do, nothing more, nothing less. We are discounting Data, Spock, the Bynars, and the various intelligent thinking androids who appeared across the series.
 
  • #33
This discussion reminds me of the role of robots in the industrial world of manufacturing. Very few of us can afford a hand-built car. Did that cost jobs during the last century? Sure, and not a few. But what would have been the alternative?

And if we look more closely at industrial history, we will certainly find even more examples of machines replacing jobs, and I think AI will do the same. What it cannot replace, in my opinion, is the understanding aspect. AIs are stupid in a way: they gather, and as @jedishrfu pointed out in the example about Julia, even invent evidence. And so are the robots in the auto industry: they do not know why they are welding parts.
 
  • #34
@elias001 you are missing my point. It's in the way Spock interfaced with the computer: he talked to it.

It's not unlike ChatGPT, where you write to it in a conversational way. This is exactly where we are headed: if it's properly trained, you'll be able to use your native language to talk to the computer.

Technology continues to evolve. The iPhone is a great example of multiple devices becoming one device. Programming languages follow a similar arc, with natural language now near or at the top.

Engineers build better tools to help build better tools. Jobs will change but humanity will adapt or die out trying.
 
  • #35
@jedishrfu but if we rely on these LLMs too much, will fewer people have the motivation to learn to code?
 
  • #36
elias001 said:
@jedishrfu I was going to reply in the thread you just replied to, but I will reply here, since I think @TensorCalculus would want to hear this. I am not sure if you two have heard, but someone important and senior at Microsoft said in an interview that there is no need to learn to code anymore because AI will do it all for us. Well, we all know about that recent Tea app hack; according to the rumor mill on the internet, the app was built via "vibe coding". Anyway, I know someone at Microsoft who is high up on the project-management totem pole; I can't say if it is at the C-suite level. This person gave me similar advice, not quite at the level of what that other person said in public, but similar in spirit and messaging: that I don't need to actually know how to code well anymore, but it is important to know how to read code, since you can always get the AI to write it for you. I was asking this person about C and assembly-language programming, but their advice applied not only to assembly or C but to coding in any language. This person's technical background is in AI as well.

I felt very uncomfortable with the implications of that person's advice. Why? My understanding is that programming, as a subset of computer science, is a skill set that, if my impression is correct, needs a decent amount of practice initially just to be able to do it, and then more practice to get good at it. Kind of like swimming, riding a bike, playing an instrument, etc. I could be wrong in my analogy, and please feel free to correct me. The thing is, even assuming AI and LLMs get to a point where they don't hallucinate, or the chances of doing so are extremely low, there is always the possibility that software one asks the AI to build contains vulnerabilities. I am not sure that someone who can merely read source code well can spot the kinds of vulnerabilities that allow black-hat hacking. I am also assuming that LLMs are trained on code from what is publicly available, like that site with the freezer-box-for-storing-fish-sounding name, Git-something. Either of you could chime in if you like. I just feel that in relinquishing so much of the code-writing task to a machine, we lose something as a result. Maybe it is creativity, speed, or an eye for knowing how to build something safely. Also, compare it to the case of kids using a calculator to do their math homework instead of learning to do it with pencil and paper, or students getting good at doing integrals by hand instead of using a CAS system.
It's scary, yes, but all it will mean is that learning to code in the way we now know it will become redundant.
For now, AI isn't perfect at all. Even "vibe coding" requires some level of coding knowledge: knowing what to prompt the AI for, etc. And with AI, people will focus on other things: as said before, testing, or developing the AIs themselves, or working in prompt science... I could name more. My dad is a developer and is in fact encouraged to use AI: but if the AI could do everything on its own, why would they employ him? There still needs to be a human debugging, testing, and navigating the project, especially when the project is a big one.
The problem right now is for those who were taught coding in a way that didn't anticipate AI: they're losing jobs because they've learnt skills that AI can now do. Losing factory jobs to automation hurt workers, but it also opened up new kinds of work.
elias001 said:
@jedishrfu but if we rely on these LLMs too much, will fewer people have the motivation to learn to code?
Probably. But the skill of writing basic code has also become correspondingly less valuable as a result of LLMs.
 
  • #37
@TensorCalculus Hackers from all three sides still have to know very basic coding skills, as do people working with embedded systems. But does AI write safer code than a human? I am assuming it is not 100% safe. So do hackers of the future just need to know how a specific AI works in terms of its code base, and find vulnerabilities that way? Also, what is prompt science? Please, I don't want to hear about spending four years in university studying how to ask AI questions. I mean, imagine there being a degree just for that.
 
  • #38
Think of AI as software. You have to deal with software and how to use it in your studies. It may take years to get really good at using a specific piece of sophisticated software. The same goes for AI (for now).
 
  • #39
elias001 said:
@TensorCalculus Hackers from all three sides still have to know very basic coding skills
True. Unethical hackers will also have problems trying to get most LLMs to write malicious code, but they will certainly be assisted by AI.
elias001 said:
But does AI write safer code than a human? I am assuming it is not 100% safe. So do hackers of the future just need to know how a specific AI works in terms of its code base, and find vulnerabilities that way?
Of course there will always be vulnerabilities in code. And you're right: we will still need ethical hackers and testers to keep those vulnerabilities to a minimum. Though much of hacking isn't even about coding: it's about exploiting human weaknesses too. The most common type of cyberattack is phishing (source), which works by tricking people into giving away information, not by exploiting vulnerabilities in code.
As to whether AI writes "safer" code, I can imagine it varies on a case-by-case basis.
elias001 said:
Also, what is prompt science? Please, I don't want to hear about spending four years in university studying how to ask AI questions. I mean, imagine there being a degree just for that.
Prompt science is the study of how prompts change the outputs of AI: which prompts are better, which are worse, etc. If I shove a whole ton of unnecessary info into my prompt, I will of course get a lower-quality output from the LLM than if I had written a concise prompt with exactly the information the AI needed.
 
  • #40
@TensorCalculus hackers don't use aligned AI. Look up WhiteRabbitNeo, WormGPT, or HackerGPT, or ask Grok or Gemini for a list of abliterated/uncensored LLMs. WhiteRabbitNeo and HackerGPT try to be humorous in their interactions. Do you see any of the LLMs as your companions? I don't know why people call them that.
 
  • #41
elias001 said:
@TensorCalculus hackers don't use aligned AI. Look up WhiteRabbitNeo, WormGPT, or HackerGPT, or ask Grok or Gemini for a list of abliterated/uncensored LLMs. WhiteRabbitNeo and HackerGPT try to be humorous in their interactions. Do you see any of the LLMs as your companions? I don't know why people call them that.
I know uncensored ones exist out there, hence the "most" :woot:
No, I don't. I'm not a huge LLM fan, though I can admit to using them from time to time. I prefer coding by myself simply because I find it fun.
 
  • #42
@TensorCalculus your profile states that you are from England. Is everything OK there? I heard there were massive political protests. By the way, a lot of the dystopian films I have seen came from the UK.
 
  • #43
elias001 said:
@TensorCalculus your profile states that you are from England. Is everything OK there? I heard there were massive political protests. By the way, a lot of the dystopian films I have seen came from the UK.
Bit off topic...
We're fine here. I live in a small town near one of the most well-off cities in England; it's perfectly safe here. When did you hear that? There were protests a while ago about immigration, but that was a short-lived, one-off thing...
If you want to continue this conversation though, since it's not too related to the thread, can I suggest you move to DMs?
 
  • #44
Well, a major tech company that someone I know works at [purposely being vague] just made a few testing/checking-based roles redundant because of AI. No one is going to do that job anymore across the whole company.
It's getting worse by the day...
 
  • #45
[two embedded videos]
  • Like
Likes TensorCalculus
  • #46
I've seen the second one, but not the first: thanks!
I don't think anyone can predict when, or if, we will ever get an AI bubble though...
 
  • #47
@TensorCalculus I have a feeling Altman knows something is up and is trying to give everyone a hint ahead of time. By the way, two of the original AI researchers at OpenAI worked with Altman; I think they were even still at OpenAI when Altman was ousted, Microsoft agreed to fund them, and everyone at OpenAI threatened to quit. Those two researchers went to my university: I knew one of them as he was finishing his math Ph.D., and I spoke briefly with the other before he went off and worked with Hinton. Some people pivot to machine learning after their math Ph.D. You have to find a way to pay the bills somehow.
 
  • Wow
Likes TensorCalculus
  • #48
@TensorCalculus I just saw this on my feed. It sounds like it is economically safer to be a mechanical/electrical/materials engineer. I mean, a lot of blue-collar jobs have to use technology and keep up with new technological developments, and engineers play a big role in that.
 
  • #49
@jedishrfu I think there is something you have not brought up in your original post, and people like me are wondering the following. Before ChatGPT arrived on the scene, there were deep learning, big data, and artificial neural networks. To the best of my memory, these AI-related concepts eventually got their own courses in various computer science departments, and for a long time people could only learn about them via a university CS department. In the meantime, software engineering became a distinct specialization within computer science, but the so-called "data scientist/engineer" was never formally recognized as something as distinct as, say, a software engineer. Nowadays AI/machine learning/data science is coming into its own. One can learn it, study it, and treat it as a separate career path not only by going through a CS department but also through, say, the math, stats, or engineering department. So for software engineers, is it hard to do additional training in data science and AI by taking courses at a typical university? I am asking because AI/data science/machine learning/artificial neural networks have software engineering and various parts of computer science as their foundations. I am basically trying to say that it might not be all doom and gloom.
 
  • #50
Hello, sorry for my late reply! My silly brain thought I'd replied to this, but I hadn't :cry:
elias001 said:
@TensorCalculus I have a feeling Altman knows something is up and is trying to give everyone a hint ahead of time. By the way, two of the original AI researchers at OpenAI worked with Altman; I think they were even still at OpenAI when Altman was ousted, Microsoft agreed to fund them, and everyone at OpenAI threatened to quit. Those two researchers went to my university: I knew one of them as he was finishing his math Ph.D., and I spoke briefly with the other before he went off and worked with Hinton. Some people pivot to machine learning after their math Ph.D. You have to find a way to pay the bills somehow.
I mean, you can be an AI researcher (there are many of them out there), but you still can't predict the future... though that does make me trust the video more.
Very cool that you spoke to them!
elias001 said:
@TensorCalculus I just saw this on my feed. It sounds like it is economically safer to be a mechanical/electrical/materials engineer. I mean, a lot of blue-collar jobs have to use technology and keep up with new technological developments, and engineers play a big role in that.

Interesting! I think we will have to wait and see how the value of the skills that CS majors are taught shifts. I am sure that the curriculum will try its best to adapt to the changes. Why do you say mechanical/electrical/materials engineering specifically?
 
  • #51
@TensorCalculus, @jedishrfu, @robphy I don't know if you folks have seen the two YouTube videos below. They have to do with something called "prompt engineering" and with AI as a compiler. The second one, by the same guy, talks about educational uses of AI. The second video is kind of vague, while the first, I am sure, has something important buried in its massive technical word salad with Python as the main ingredient. I don't have the technical expertise to be the equivalent of a food critic. Wait, even if I did, I would not want to sample any of it: it is in Python = super gross.

[two YouTube video embeds]
 
  • #52
Dev vs. Tester: Guess who’s got the edge in this epic meme showdown? (codemug)
 
  • #54
1) If you want to get a job done efficiently with computers, the first question is: "What is the job?" It is not which computer language, technique, or fad of the year to use. A specialist in the application domain has a real advantage over a specialist in computer science. Of course, both are important, but the application field often remains long after the current computer fad is gone.
2) A lot of advanced computing techniques are made available in program libraries and working environments for anyone to use; see the sketch below.
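
A minimal illustration of point 2, assuming the widely used scikit-learn library (any mainstream library would make the same point): a sophisticated classifier is a few library calls, with no need to implement the algorithm yourself.

```python
# Decades of technique encapsulated in a library: train and score a
# random-forest classifier on a bundled dataset in a handful of lines.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
print(f"test accuracy: {model.score(X_test, y_test):.2f}")
```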
 
  • #55
Re: the miserable employment situation in IT right now

A painful and probably necessary course correction (social scientists call it a "disruption") is going on right now in software development. IMO it is only partly due to AI. There has been a glut of people with newly minted CS degrees for some time now in the US; they have always had trouble getting jobs because a large corporation such as Amazon can hire a person from a foreign country via an H-1B visa. The H-1B folks had to remain employed continuously for 5 years to stay in the country and get a green card, and many were thus willing to work for far lower wages than most US citizens expected. Amazon currently employs about 10,000 people via H-1B visas, and furthermore, those visa holders will put up with a lot more crap than someone who can afford to change jobs. The party line is that these corporations cannot find US citizens with appropriate skills. Personally, I don't think they tried; I think it's all about lower salaries and leverage over their workforce.

But the big corporations mainly recruited from the Ivy League and high-reputation schools; if you got a degree from a perfectly good tech school or regional university, you might never even have gotten a shot at applying to the big tech companies.

And now, newly graduated CS students are well and truly screwed. Further, companies are in a panic due to AI and the Trump admin's wildly unpredictable policies, and IMO IT management has always suffered from extreme short-sightedness anyway. So right now, the only IT people being hired by corporations are those steeped in LLM skills.

Re: LLMs. Those have scraped and compiled almost everything they could find on the WWW, the crap as well as the cream, so a certain portion of what they regurgitate (as glorified search engines with the ability to summarize) is going to be wrong, simply because the sources they scraped (mostly illegally) are wrong. Further, the LLM bots have ceased honoring robots.txt files, and in many recent cases they scrape so aggressively that their discovery of a new site leaves that site in a state as if it were under a deliberate denial-of-service attack. I have experienced this myself; desperately casting about, I was able to "save" my website by placing its DNS behind a Cloudflare screen. Cloudflare parses all incoming requests and discards those believed to come from misbehaving bots and other bad actors.
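
For reference, the robots.txt opt-out that bots are supposed to honor looks like the sketch below (GPTBot is the crawler name OpenAI has published; the point above is that honoring it is entirely voluntary):

```
# robots.txt at the site root: a request, not an enforcement mechanism.
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
```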

The market for IT professionals WILL rebound, because it has to, but I am not sure how long that will take. I have advised my young friends who are currently unable to find work to try Freelancer.com, a place where normal people (not necessarily huge corporations) can find and hire programmers, sysadmins, project managers, and many other roles as needed. I have found Freelancer very useful; those who hire leave reviews of those hired, and those hired leave reviews of the hirers. So it is reputation-based, and frankly, the profession could benefit greatly from a two-way, reputation-based hiring system instead of the top-heavy, management-abusive situation in which programmers have largely labored in recent years. At least, this is how I view things. Of course, Freelancer skims off about 15% for its matchmaking, and workers do not receive any benefits such as health insurance, so they are purely independent contractors, and US law has never favored their situation financially. But 15% is actually reasonable. Most large companies require all independent contractors (and they hired a LOT of them in the past) to come through a few contract houses, and those contracting companies (acting mainly as payroll for the workers and sheltering the company from liability) generally skimmed off much greater percentages, sometimes 50% or more. I contracted for Wyeth for a couple of years back in the day, being paid $110 per hour, and I think Wyeth was paying the contract house about twice that for my time.
 
Last edited:
  • Informative
  • Like
Likes bhobba and symbolipoint
  • #56
#55, a very long post, but interesting, and to some extent informative.

I ask: what does the abbreviation "LLM" mean?
 
  • #57
LLM: Large Language Model, an AI machine-learning system trained by "scraping" (fetching, parsing, storing) publicly available websites and databases, and sometimes illegally obtained information. It can parse natural language, possibly also understand human speech, and tries to "understand" and represent the world symbolically based on its "reading". LLMs are used in place of traditional search engines, based on the "knowledge" they have been trained on, and are beginning to be used as assistants for other purposes. I have begun to encounter annoying phone-menu systems (Hyatt has one, for example) that seem to be AI-based but are not ready for prime time in any out-of-the-ordinary situation. It is difficult to get past them to a human being.

I have heard for years now that resumes submitted to many large corporations are screened by AI, and that the system can be gamed, often must be gamed: if you can't get the resume past the LLM, no human being will ever see it, and there is thus no chance of being interviewed.
 
Last edited:
  • Informative
Likes symbolipoint
  • #58
Here is an interesting article about Elon Musk gambling on LLMs being the future:

https://www.wsj.com/tech/elon-musk-...ower-dec4c70d?mod=djemCybersecruityPro&tpl=cs

Note that LLMs run on server farms of PCs whose powerful graphics processors are used in clustered form to create a supercomputer. Yes, Nvidia chips, not the x86 or ARM-based processors once considered the heart of a PC. These server farms consume enormous power and are straining electrical utilities; public utility rates for everyone are rising in order to build power plants able to provide enough power for these AI server farms. The early efforts toward server farms were sometimes called COWs (Clusters of Workstations) or NOWs (Networks of Workstations), like SETI, and they used sophisticated distributed-computing algorithms to dole the computational work out to possibly millions of processors. This work progressed for years before OpenAI competitively forced all the major search engines to add AI capability.
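
A toy sketch of that "dole out the work" idea, scaled down to one machine's cores with Python's standard library (illustrative only; real farms use far more elaborate schedulers):

```python
# Scatter/gather: split work into chunks, farm them out, combine results.
from multiprocessing import Pool

def work(chunk):
    # Each worker computes a partial result independently.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    chunks = [range(i, i + 1000) for i in range(0, 10_000, 1000)]
    with Pool() as pool:
        partials = pool.map(work, chunks)   # scatter the chunks to workers
    print(sum(partials))                    # gather the partial sums
```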

So yes, oddly, these issues are related peripherally to the crisis in software jobs.

This is one reason people are reconsidering nuclear power options, as described in https://citizendium.org/wiki/Nuclear_power_reconsidered

BTW, side note: you can turn off LLM usage in Google searches by placing -ai after the search terms. This skips the AI summary and prevents the activation of an LLM for it, using less power. But Google already runs its search engine on enormous server farms and was already using AI behind the scenes to minimize their power consumption. Google farms use solar arrays and close down after the sun goes down, so when you Google something after dark, the search is likely serviced by a farm on the other side of the world where there is sunlight. Most LLM companies are not at that level of sophistication yet; their farms just run 24x7 and constantly draw power from the public grid.
 
Last edited:
  • Informative
  • Like
Likes bhobba and symbolipoint
  • #59
Perhaps a double major in CS and AI + Data Science?

From my 30 years of experience as a computer programmer, these things go up and down. Sometimes all you have to do is be able to spell computer; other times, even with 30 years of experience, it's hard.

Thanks
Bill
 
  • Like
  • Wow
Likes russ_watters, symbolipoint and harborsparrow
