
How the Rise of AI Is Transforming STEM: Labs to Classrooms

📖 Read Time: 6 minutes
📊 Readability: Advanced (Technical knowledge needed)
🔖 Core Topics: ai, systems, human, learning, jobs

We asked our Physics Forums (PF) Advisors: “How do you see the rise of AI affecting STEM in the lab, classroom, industry, and everyday society?” We received many thoughtful replies and are publishing them in parts — here are the first responses. Enjoy!

Community responses

Ranger Mike

If past trend forecasting is any guide, I expect lots of investment in AI but relatively little useful product for some time. I remember predictions from the early 1980s, such as “lights-out” factories and a service-only U.S. economy, that did not come true. Deming-style quality management and statistical process control (SPC) were attractive ideas in business school, but neither became the universal reality that was predicted.

AI, in my opinion, is often the latest buzzword — replacing earlier fads like “robotics,” “going green,” or “sustainability.” Think of the movie The Graduate: at the time “plastics” was the big buzzword; today that word has very different connotations.

In industry, AI can produce viable products when the market need is correctly identified. For example, I see AI being effective for long-distance trucking, but much less so for short-haul operations where too many variables make human experience and decision-making important.

I also encounter AI in manufacturing: CAD models that automatically generate tool code and machine-driven inspection against CAD nominal. That capability exists today, but it took decades to develop.

jack action

I view AI mostly as a powerful extension of statistics. It looks especially promising for research tasks such as discovering new molecules or medical treatments: AI can identify patterns and suggest the most promising experiments to run first, much as statistical methods already guide scientific exploration, only faster and with fewer resources.
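To make that idea concrete, here is a minimal sketch of statistics-guided experiment selection: fit a simple surrogate model to results already measured, then rank untried candidates by how promising and how uncertain they look. The data, the features, and the upper-confidence-bound scoring rule are illustrative assumptions added for this article, not anything in the original response.

```python
# Minimal sketch of "statistics-guided" experiment selection: fit a surrogate
# model to measured results, then rank untried candidates by predicted outcome
# plus uncertainty. All data here is synthetic and purely illustrative.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical data: each row describes a candidate experiment
# (e.g., temperature, concentration), scaled to [0, 1].
rng = np.random.default_rng(0)
tried = rng.random((8, 2))                       # experiments already run
yields = np.sin(3 * tried[:, 0]) + tried[:, 1]   # stand-in for measured outcomes
candidates = rng.random((50, 2))                 # experiments not yet run

# Fit a Gaussian-process surrogate to the measured results.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), normalize_y=True)
gp.fit(tried, yields)

# Score candidates by predicted outcome plus uncertainty (upper confidence
# bound), balancing "looks good" against "we know little about it".
mean, std = gp.predict(candidates, return_std=True)
ucb = mean + 1.0 * std
best = np.argsort(ucb)[::-1][:5]
print("Most promising experiments to run next:", best)
```

In practice the surrogate model, the features, and the scoring rule would all be domain-specific choices; the point is only that the statistics suggest where to look next.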

However, AI also worries me because, like statistics, it always produces an answer — even when the premises are flawed. People may analyze complex systems with many subjective assumptions and then accept an AI-produced result uncritically. If findings become treated as “must be” rather than “could be,” and then are turned into rules or laws, that becomes dangerous.

Trusting AI for well-defined, simple systems (for example, on a production line) is reasonable. But if we begin to trust AI without human oversight, we must reevaluate responsibility and liability.

Dr Transport

I have mixed feelings about AI. It can be very useful in STEM education: teachers could use AI to tailor instruction for each student, potentially improving learning outcomes. Marketing already uses AI to target ads effectively (think Amazon or Google).

On the other hand, AI and machine learning are sometimes treated as buzzwords to secure funding for marginal research. One former boss insisted on GPU-equipped computers and a paragraph in every proposal about potential machine learning use — a classic buzzword chase.

I’m not an AI expert; I have read some papers, but I often found them short on clarity and reproducibility. Too often, I saw statistics applied in only a half-hearted way to support dubious claims. As Mark Twain supposedly said: “Lies, damned lies, and statistics.”

anorlunda

I think AI will arrive slowly in conventional education. One promising idea is the AI personal tutor: an AI that remembers every student answer, analyzes classroom video to assess attention and confusion, and diagnoses misconceptions.

The AI could estimate which concepts a student has mastered, missed, or misunderstood. It could observe homework, select targeted remedial lessons (written, video, or practice problems), and replay relevant clips from human teachers or alternative instructors the student responds to better.

An effective AI tutor aims to give a student near-100% comprehension of lesson N before moving on to lesson N+1. Ideally it would report progress to teachers and parents, and be affordable or open-source so it benefits all students — while avoiding exploitative commercialization, malware, or political manipulation. All technology can be used for good or ill.
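As an editorial illustration of that gating logic, here is a minimal sketch of a mastery-tracking loop: estimate per-concept mastery, remediate the weakest concept, and advance to lesson N+1 only once every concept in lesson N clears a threshold. The concept names, the 0.95 threshold, and the simple update rule are placeholder assumptions, not part of the proposal above.

```python
# Minimal sketch of a mastery-gating tutor loop: track per-concept mastery,
# pick remedial material for the weakest concept, and only advance from
# lesson N to lesson N+1 once every concept clears a threshold.

from dataclasses import dataclass, field

@dataclass
class TutorState:
    # mastery[concept] is a crude estimate in [0, 1] of how well the student knows it
    mastery: dict[str, float] = field(default_factory=dict)

    def record_answer(self, concept: str, correct: bool, rate: float = 0.3) -> None:
        """Nudge the estimate toward 1 on a correct answer, toward 0 otherwise."""
        old = self.mastery.get(concept, 0.0)
        target = 1.0 if correct else 0.0
        self.mastery[concept] = old + rate * (target - old)

    def weakest(self, concepts: list[str]) -> str:
        """The concept most in need of remedial material (written, video, or practice)."""
        return min(concepts, key=lambda c: self.mastery.get(c, 0.0))

    def ready_to_advance(self, concepts: list[str], threshold: float = 0.95) -> bool:
        """Gate lesson N+1 on near-complete comprehension of lesson N."""
        return all(self.mastery.get(c, 0.0) >= threshold for c in concepts)

# Hypothetical lesson with three concepts.
lesson_n = ["slope", "intercept", "graph reading"]
state = TutorState()

# The tutor loops: ask a question, record the result, then either serve
# remedial material for the weakest concept or move on to the next lesson.
state.record_answer("slope", correct=True)
state.record_answer("intercept", correct=False)
if not state.ready_to_advance(lesson_n):
    print("Remediate:", state.weakest(lesson_n))
```

A real tutor would use a better-calibrated student model (for example, Bayesian knowledge tracing) and draw on the kinds of signals described above, but the gate-then-advance structure would be the same.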

STEMucator

Here are several concrete ways AI could affect STEM and daily life:

  • Automatic cashiers. Self-service registers will continue to improve; soon machines may scan and process purchases with minimal human intervention.
  • Automated lab experimentation. Machines will execute lab procedures (for example, controlling quantities and mixing in chemistry) with high repeatability.
  • AI instructors. Classroom formats may change: intelligent virtual assistants could provide instruction and individualized pacing. This could increase remote learning and reduce reliance on traditional classrooms.
  • Automated assembly line workers. Robots and AI systems will take over many factory tasks, further automating production.
  • Self-driving vehicles. Autonomous systems for cars, trucks, and even aircraft already exist in limited forms (autopilot, experimental self-driving cars). Full autonomy could remove the need for drivers in many contexts.

BillTre

First, clarify what we mean by “the rise of AI”: the current state of tools, or a future, more advanced scenario? AI will remain a combination of programming and hardware and will expand functionally in stages:

  • Fill useful functional voids.
  • Expand into areas that may eventually compete with human jobs, causing social friction.

Below I outline likely effects by sector.

Lab

  • AI will assist with experiment execution, data collection, and publishing tasks.
  • It will discover relationships in large databases and take over repetitive tasks that computers can run unattended.
  • AI may plan experiments and perform sophisticated literature and data searches, possibly identifying previously unknown relationships.
  • Over time, some tasks done by graduate students may be automated, which could affect how many PhDs are trained or employed.

Classroom

  • AI systems will develop deeper domain knowledge and become more effective teaching assistants.
  • To replace human instructors, these systems must respond quickly and adaptively; personality and interaction quality also matter.

Industry

  • Expect increased efficiency and flexible, user-specific production (just-in-time manufacturing taken to its limit).
  • Job displacement will alter industry–government relationships: political support for business incentives often relies on job creation, and that equation changes if jobs disappear.

Everyday society

  • Job displacement will cause social tension unless managed via attrition, retraining, or economic policy.
  • New jobs will be created, but they may not match the scale or distribution of lost jobs.
  • Political responses might include limiting AI’s replacement of certain jobs, pacing automation to match job attrition, or ensuring societal benefits (e.g., taxation, training programs).

bhobba

AI is already disruptive and will be even more so. Driverless cars illustrate this: perfected autonomy could eliminate many driving-related jobs (truck, taxi, ride-share drivers), reduce parking revenue, change traffic enforcement roles, and remove the stress of commuting.

One interesting application I noticed is AI-based upscaling in consumer televisions. The first generation of 8K upscaling used machine learning; more recent models use deep learning for improved results. Companies are also experimenting with AI-assisted encoding/decoding workflows to make high-resolution streaming viable at realistic bitrates.

While some questions remain — for example, whether most viewers can perceive a difference between 4K and 8K at normal viewing distances — AI-enabled downscaling of 8K to 4K can yield superior-looking 4K images. Consumer hardware will continue to improve rapidly.

Andy Resnick

“AI” covers a wide range of techniques. Knowledge-based systems and big-data analysis have already changed industry and everyday life. In the lab, image analysis (clinical imaging, pattern discovery in genomic/proteomic datasets) is a prominent AI application. Aside from a few headline papers, I haven’t seen expert systems regularly conceiving entire experiments.
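For readers curious what “pattern discovery” in a large dataset can look like in code, here is a minimal sketch that reduces a synthetic expression matrix with PCA and then clusters the samples. The data and the PCA plus k-means pipeline are editorial assumptions, not a description of any specific lab workflow.

```python
# Minimal sketch of pattern discovery in a large feature matrix: reduce the
# dimensionality with PCA, then cluster the samples to see whether groups
# emerge. The "expression" data below is synthetic and purely illustrative.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Hypothetical expression matrix: 60 samples x 500 features, with two
# planted groups that differ in a subset of features.
rng = np.random.default_rng(1)
data = rng.normal(size=(60, 500))
data[:30, :50] += 2.0  # first 30 samples are shifted in the first 50 features

# Project onto a few principal components, then cluster the samples.
components = PCA(n_components=5).fit_transform(data)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(components)

print("Cluster assignments:", labels)
```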

In classrooms, I’ve seen “AI-lite” mastery learning systems for pre-calculus and calculus. I expect similar approaches for vocational training, but I have not seen evidence that AI can wholly replace an instructor.

My general concern is that humans will adapt to AI interfaces as presented, rather than designing AI systems to fit human needs — similar to how smartphones have reshaped how we interact rather than being custom-shaped to our faces.

neilparker62

My main real-world encounter with advanced AI was AlphaZero beating Stockfish in chess — fascinating and revealing about how AI can discover strong strategies. In the classroom, useful low-level AI tools include online graphing utilities and Wolfram Alpha. For now, such tools are helpful rather than replacements for teachers.

jfizzix

I’m concerned about AI-assisted deception (deepfakes). As tools become widely available, misinformation campaigns will become ever more sophisticated. It’s important that we develop defenses — possibly using AI as well — to prevent widespread cynicism, paranoia, and societal breakdown.


Read part 2!
