How AI Is Changing STEM Labs, Classrooms & Industry
We asked our PF Advisors: "How do you see the rise in AI affecting STEM in the lab, classroom, industry, or in everyday society?" We received so many great responses that we split them into parts. This is Part 2. Read Part 1 here. Enjoy!
gleem - overview
AI: Maybe not quite ready for prime time, but it is coming.
I have no particular expertise in AI, but I follow developments in AI and robotics regularly, checking for significant advances roughly every week. Many occupations are ripe for substantial impact from AI, especially those based on executing standard procedures and those that do not require delicate physical manipulation under varying conditions. Robots belong in this discussion: even when they do not run AI software locally, they can and will be supervised or coordinated by AI systems.
Before I continue, I recommend the following informative DARPA overview of AI development and capabilities: https://www.darpa.mil/about-us/darpa-perspective-on-ai
Technical advances
The experts are divided on the magnitude of impacts on technology and society in general, particularly with regard to artificial general intelligence (AGI). I do not believe AGI is likely within the next ten to twenty years. Most near-term impact will come from taking over more of the many rules-based tasks. Much AI development is still carried out with traditional computer hardware and software, which limits processing complexity.
However, the introduction of AI-dedicated processors, such as Intel’s NNP (Neural Network Processor) announced late last year, will affect AI implementation significantly. Some claim the effective complexity of AI systems (related to network interconnectivity, analogous to synapses) is increasing by roughly an order of magnitude each year. https://www.nextplatform.com/2019/11/13/intel-throws-down-ai-gauntlet-with-neural-network-chips/
MIT has also developed an artificial-synapse chip that mimics neuronal synapses, reducing computation and memory requirements for certain AI workloads. If these trends continue, we could see AI systems approaching human-like complexity sooner than some predictions suggest. http://news.mit.edu/2020/thousands-artificial-brain-synapses-single-chip-0608
Limitations and public acceptance
Let me make some general remarks about issues that have delayed extensive implementation. AI has been criticized for:
- being task-specific (performing only a single task and losing that ability when retrained for another),
- lacking context sensitivity in language applications,
- inadvertent biases introduced by biased training data,
- high power consumption, and
- opacity: difficulty in determining how an AI arrived at its conclusions.
These problems are being addressed, but perhaps the biggest obstacle is public acceptance, driven by concerns about bias, liability, misuse, and privacy. Privacy may be the prime concern for most people.
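One of the criticisms above, bias inherited from training data, can be illustrated with a toy example. Everything here is synthetic and hypothetical: a one-dimensional threshold "classifier" fitted to data dominated by one group, so the under-represented group ends up on the wrong side of the learned boundary.

```python
import numpy as np

# Toy illustration of training-data bias (all numbers synthetic).
# A 1-D threshold classifier is fitted to data dominated by group X;
# the single sample from group Y barely moves the boundary, so Y's
# positives end up on the wrong side of it.

def fit_threshold(values, labels):
    """Put the decision boundary midway between the class means."""
    return (values[labels == 0].mean() + values[labels == 1].mean()) / 2.0

# Training set: negatives and positives from group X, plus ONE
# positive from the under-represented group Y (feature value 2.6).
train_vals = np.array([1.0, 1.2, 0.8, 1.1, 5.0, 5.2, 4.8, 5.1, 4.9, 2.6])
train_lbls = np.array([0,   0,   0,   0,   1,   1,   1,   1,   1,   1])

thr = fit_threshold(train_vals, train_lbls)   # lands near 2.8

# Held-out positives: group X scores perfectly, group Y is misclassified.
acc_x = ((np.array([5.0, 4.9, 5.3]) > thr).astype(int) == 1).mean()
acc_y = ((np.array([2.5, 2.6, 2.7]) > thr).astype(int) == 1).mean()
```

The model is "accurate" overall while failing entirely on group Y, which is exactly how a biased dataset can hide inside a good-looking aggregate metric.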
Impact on STEM data analysis and jobs
In STEM, AI already affects data analysis. Modern experiments and instruments generate massive data volumes that humans cannot analyze quickly enough. AI has helped with areas such as astronomical data and the LHC’s backlog of data analysis. This accelerates scientific discovery but may reduce the need for human analysts (for example, some grad-student tasks).
Some signals, like frequency spectra, may still be best interpreted by humans, while other domains (e.g., reflected microwave spectral signatures) are now being analyzed by AI better than humans can. Coding itself is a major IT job; AI that writes programs or discovers algorithms is beginning to produce usable, and sometimes surprising, results.
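As a minimal sketch of the kind of automated pattern matching described above, here is a nearest-centroid classifier applied to synthetic "spectra" (the data, line positions, and class setup are invented for illustration; real pipelines are far more sophisticated):

```python
import numpy as np

# Hedged sketch with synthetic data: classify spectra by which
# class-average spectrum (centroid) they sit closest to.

rng = np.random.default_rng(0)

def make_spectrum(peak, n=64, noise=0.05):
    """A Gaussian emission line at `peak` plus measurement noise."""
    x = np.linspace(0.0, 1.0, n)
    return np.exp(-((x - peak) ** 2) / 0.002) + noise * rng.standard_normal(n)

# Two hypothetical source classes, distinguished by line position.
class_a = np.stack([make_spectrum(0.3) for _ in range(20)])
class_b = np.stack([make_spectrum(0.7) for _ in range(20)])
centroids = np.stack([class_a.mean(axis=0), class_b.mean(axis=0)])

def classify(spectrum):
    """Assign the spectrum to the nearest class centroid."""
    dists = np.linalg.norm(centroids - spectrum, axis=1)
    return int(np.argmin(dists))

# New observations with slightly shifted lines still match correctly.
labels = [classify(make_spectrum(p)) for p in (0.29, 0.31, 0.69, 0.71)]
```

The appeal for large surveys is that once the centroids (or a trained model) exist, classification is cheap arithmetic that scales to data volumes no human team could inspect.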
COVID-19 and business incentives
COVID-19 has incentivized businesses to reduce dependence on humans. One of the largest expenses and management problems for companies is human resources. Although new jobs may be created by AI, many of those roles could themselves be automated. For example, the FDIC proposed a revamped quarterly reporting system to replace manual data-entry bottlenecks exposed by COVID-19; AI could automate data input and reduce delays. https://www.sciencedaily.com/releases/2020/05/200504150220.htm
A report cited that each robot can replace about 3.3 workers in some contexts, and many bureaucratic document-processing roles are well suited to automation. We might finally move closer to a truly paperless workflow in some sectors.
Healthcare and routine communications
I previously thought healthcare workers would be among the least affected by AI, but COVID-19 may accelerate telemedicine, robotic delivery of supplies and medications, and AI-driven scheduling/triage systems that reduce patient-provider contact.
Routine communications via AI (think Alexa and similar assistants) will continue to improve. I reviewed over 900 job classifications on a Department of Labor site (https://www.careeronestop.org/Toolkit/Careers/careers-largest-employment.aspx?currentpage=1) and identified roughly 40 million jobs (about 25% of full-time workers) that could be impacted to some degree, many in communication-heavy occupations. Cash exchange is disappearing; Amazon is experimenting with unattended stores, and large retailers are testing automated stocking and floor-cleaning robots. Many existing automated systems may be supervised by AI, with humans remaining the last line of defense for exceptions.
Broader socio-technical concerns
Historically, mechanical automation and computerization affected many blue-collar and low-end white-collar jobs. A Brookings report shows that AI will likely have substantial effects on higher-end white-collar work as well, though there is significant uncertainty in such predictions.
Certain implementations have been controversial. Microsoft’s experimental chatbot Tay, which was supposed to learn from the web, was easily manipulated by mischievous people. Microsoft also replaced 50 editors with AI to select featured articles; the system mistakenly mixed up photographs of band members (Little Mix), creating a public backlash. Facial-recognition systems have well-known issues with identifying people of color, problems that I expect will be addressed over time.
AI may not appear as human-like robots, but it will be pervasive. AI will not be 100% reliable (neither are humans), but its speed and reach will make it a powerful tool in business, science, and conflict. I remain skeptical about controlling nefarious AI applications: if an AI gives an advantage, developers may deploy it despite legal or ethical constraints. Illegal activities like hacking continue despite laws.
Astronuc - perspective
That’s an interesting question. It helps to understand what we mean by AI. Useful overviews include: Accenture, IBM AI, and IBM LinuxONE.
PNNL uses AI across large and small applications; data analytics or “big data” is one key area. AI can analyze large datasets, but results are only as good as the data and the rules/engine used.
AI is useful for analyzing networks and systems, especially when predictive (foresightful, anticipatory, or insightful). If a prediction is wrong, a system may become unstable and cause damage. In benign cases that damage is trivial, but when injuries or deaths could result, the consequences are catastrophic.
Large tech companies use AI to enhance user experience, though I consider much of that manipulation rather than pure enhancement.
In science and engineering, AI helps with complex multi-variable problems. For example, optimizing compositions for complex alloys, such as stainless steels in the Fe-Cr-Ni-Mo-Mn-(C,N) system, involves many interacting elements and minor impurities. Computational chemistry tools (e.g., CALPHAD) and complementary software model thermophysical and mechanical properties, corrosion, creep, and more. These problems scale quickly in complexity.
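To make the combinatorial scaling concrete, here is a toy composition search. The "merit" function is entirely made up for illustration (real work would use a CALPHAD-class thermodynamic model or a trained surrogate, not this quadratic), but the grid arithmetic shows why each added element multiplies the search space:

```python
import itertools

# Illustrative only: a hypothetical scalar "merit" score stands in for
# a real property predictor, just to show the scaling of the search.

def merit(cr, ni, mo):
    """Made-up score peaking near 18% Cr, 10% Ni, 2% Mo (for the example)."""
    return -((cr - 18.0) ** 2 + (ni - 10.0) ** 2 + (mo - 2.0) ** 2)

grid = {
    "cr": range(14, 23),   # wt% chromium candidates
    "ni": range(6, 15),    # wt% nickel candidates
    "mo": range(0, 5),     # wt% molybdenum candidates
}

# Exhaustive search over every composition on the grid.
best = max(
    itertools.product(grid["cr"], grid["ni"], grid["mo"]),
    key=lambda c: merit(*c),
)
n_points = len(grid["cr"]) * len(grid["ni"]) * len(grid["mo"])  # 405 here
# Each additional alloying element multiplies the grid size again,
# which is why AI surrogates are attractive for pruning the search.
```

With three elements the grid already has hundreds of points; at CALPHAD evaluation costs and with a dozen elements plus impurities, exhaustive search becomes impossible, and that is the gap learned surrogates aim to fill.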
Simulating alloys in radiation environments (e.g., nuclear reactors) adds further complexity: neutron flux causes atomic displacements, transmutation, and radiation-induced chemistry at the atomic level. When used correctly, AI can be beneficial; when misused, it can be dangerous.
Examples of misuse include AI-driven misinformation or inappropriate health recommendations. Proper AI use requires accurate, factual inputs. In the end, AI is a tool whose impact depends on its users’ motivations.
russ_watters - perspective
I have a different view because I find “AI” poorly defined and often portrayed like science fiction, which can overstate its value. I find its mundanity to be its strength. The Star Trek character Data illustrates the problem: a machine can exceed humans in many ways but still lack basic human emotion and irrationality. If AI’s goal is to pass for human, I ask “why bother?”
The “AI effect” labels any unsolved problem as AI. Problems once called AI (handwriting recognition, speech recognition) are now just tools. The important question is not whether a machine sounds human, but whether it performs useful functions quickly and accurately.
AI becomes most impactful when it is ubiquitous and largely invisible, embedded in devices and services where you don’t expect it. Examples include:
- Thermostats that learn preferences and optimize energy use.
- Cars that adapt shift behavior to match the driver’s style.
- TV/DVR systems that recommend and record shows you didn’t know you wanted.
- Smart refrigerators that reorder staples you ran out of.
- Health signals inferred from behavioral changes (e.g., lack of movement) that suggest illness.
- Social platforms that infer preferences and surface tailored ads or content.
Some applications feel intrusive, but the upside of the “internet of things” and pervasive intelligence is significant and often underappreciated.
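The thermostat example above can be as mundane as a running average. This sketch (hypothetical class and parameters, not any real product's algorithm) shows the simplest form such "learning" could take: an exponential moving average of the user's manual overrides.

```python
# Minimal sketch of "learning a preference": blend each manual
# override into the target via an exponential moving average.
# Class name and parameters are invented for illustration.

class LearningThermostat:
    def __init__(self, setpoint=20.0, alpha=0.3):
        self.setpoint = setpoint   # current target, in deg C
        self.alpha = alpha         # how quickly overrides are absorbed

    def user_override(self, temp):
        """Move the learned setpoint a fraction of the way to the override."""
        self.setpoint += self.alpha * (temp - self.setpoint)

t = LearningThermostat()
for evening_choice in [22.0, 22.0, 22.0, 22.0, 22.0]:
    t.user_override(evening_choice)
# After repeated 22 deg overrides, the setpoint has drifted close to 22.
```

Nothing here resembles science-fiction AI, which is the point: invisible, boring adaptation like this is where much of the practical value sits.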
jambaugh - perspective
I think the recent media hype around AI is overblown. We are far from systems that match human conceptual understanding. Still, neural-net models have made practical advances for pattern recognition and classification.
In education, AI has potential for automated learning, but current trends too often force learners to fit computerized instruction instead of tailoring instruction to learners. AI research should address questions like how to automate a teacher’s ability to diagnose why a student made a particular mistake and how to adapt teaching to correct conceptual misunderstandings.
Neural networks, even recurrent ones, are deterministic once trained. Their outputs can be encoded as direct algorithms. Training can be automated, but the resulting “code” is often opaque to the programmer; unexpected negative consequences can arise if we over-trust hidden algorithms.
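The point that a trained network is just a fixed algorithm can be shown directly. Here a tiny two-layer network with hand-picked weights (chosen for the example, not learned) computes XOR as plain arithmetic, with no ML framework in sight:

```python
import numpy as np

# Once trained, a feed-forward net is fixed arithmetic: its weights
# can be pasted into ordinary code. These weights are hand-picked so
# the two-unit hidden layer computes XOR.

W1 = np.array([[1.0, 1.0], [1.0, 1.0]])
b1 = np.array([0.0, -1.0])
W2 = np.array([1.0, -2.0])
b2 = 0.0

def step(x):
    return (x > 0.5).astype(float)  # hard threshold activation

def net(x):
    """Deterministic forward pass: multiply, threshold, repeat."""
    h = step(x @ W1 + b1)
    return float(step(np.array([h @ W2 + b2]))[0])

outputs = [net(np.array(p)) for p in [(0, 0), (0, 1), (1, 0), (1, 1)]]
```

The same identical inputs always give the same outputs; the opacity comes not from nondeterminism but from the fact that learned weight values, unlike these hand-picked ones, carry no human-readable rationale.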
We are likely still a few paradigm shifts away from the kind of AI portrayed in popular media. Voice assistants like Siri and Alexa show that current systems encode aggregate behavior on centralized servers and cannot fully adapt to individual users beyond a limited set of customization options. My prediction is a future of modest disappointment in AI’s promise over the next decades, until another paradigm shift occurs.
I have a BS in Information Sciences from UW-Milwaukee. I’ve helped manage Physics Forums for over 22 years. I enjoy learning and discussing new scientific developments. STEM communication and policy are big interests as well. Currently a Sr. SEO Specialist at Shopify and writer at importsem.com