AI and Learning: How It Impacts Our Future
PF Advisors: How AI Will Affect STEM — Part 3
We asked our PF Advisors, “How do you see the rise in AI affecting STEM in the lab, classroom, industry, and everyday society?” We received so many great responses we needed to split them into parts. These are the first responses for part 3. Read part 1 here and part 2 here. Enjoy!
That’s an interesting question, not least because the term “AI” covers a whole range of technologies. There have already been significant — and likely permanent — changes in industry and everyday society due to knowledge-based systems and big-data analysis.
In the lab, much of what I see is focused on image analysis: not just clinical applications (radiology images and disease diagnosis), but also pattern finding in image, genomic, and proteomic data. While there are reports of using AI to help design experiments, aside from one high-profile paper in 2009 I have not yet read about an expert system independently conceiving an experiment.
In the classroom, I have experience with AI-lite systems in place for much of the pre-calculus and calculus sequence in math (“mastery learning”). I expect similar approaches could be used in vocational settings. However, I have not seen evidence that an AI system can wholly replace an instructor.
As a summary comment, my view of AI may be retrograde, but I draw a parallel between how AI will grow to influence our social and professional lives and the way phones have permeated them. In the past, phones were designed to interface with people: handsets conformed to our faces. Now people conform to smartphones. One result is that we hold smartphones differently than the old handset; another is that if your fingers are too large for the virtual keypad, you are out of luck.
That is to say, humans will likely adapt to AI interfaces as presented to us, rather than adapting AI to fit our needs. I admit this is a somewhat pessimistic and passive view of humanity.
From my vantage point, there is some confusion about “AI.” In public discourse, when people think of AI they often imagine science-fiction images of machines with human-level consciousness. The AI currently in use does not have that capability.
What we are seeing instead is the rise of machine learning — algorithms that learn and make predictions from data without being explicitly programmed. Machine learning’s rise has been enabled by several factors:
- Greater computational capabilities.
- The huge increase in data collection across virtually all areas of life (the “big data” phenomenon), including many scientific fields such as genomics, proteomics, high-energy particle physics, neuroscience, and remote sensing.
- Developments in statistics, particularly nonparametric methods — neural networks are essentially nonlinear statistical models used for function approximation.
These factors have already had major impacts in STEM research by providing insights and generating hypotheses much faster than before, and I expect that trend to continue.
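The “nonlinear statistical model for function approximation” view of neural networks mentioned above can be made concrete with a minimal sketch: a one-hidden-layer network fit to y = sin(x) by gradient descent on mean-squared error. All sizes, seeds, and learning rates here are arbitrary illustrative choices, not a recipe from the article.

```python
import numpy as np

# One hidden layer of tanh units: a set of learned nonlinear basis
# functions combined linearly -- i.e., a nonparametric regression model.
rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)

hidden = 16
W1 = rng.normal(0, 1.0, (1, hidden))
b1 = np.zeros(hidden)
W2 = rng.normal(0, 1.0, (hidden, 1))
b2 = np.zeros(1)

lr = 0.1
for step in range(3000):
    h = np.tanh(x @ W1 + b1)      # nonlinear basis functions
    pred = h @ W2 + b2            # linear combination of them
    err = pred - y
    # Backpropagate the mean-squared-error gradient.
    dW2 = h.T @ err / len(x)
    db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    dW1 = x.T @ dh / len(x)
    db1 = dh.mean(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

mse = float(((np.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2).mean())
print(f"final MSE: {mse:.4f}")
```

Swapping in more data, more hidden units, or a different target function changes nothing structural — which is precisely why the same machinery transfers across genomics, particle physics, and remote sensing.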
In industry — specifically the pharma/biotech/healthcare sectors where I work — I’ve seen machine learning used to integrate real-world data into clinical-trial design and to enhance the ability to find molecular matches for drugs or vaccines, which can speed drug development. The COVID-19 pandemic has accelerated activity in this area within my company. I see similar developments across many other industries.
One additional area is the development of new infrastructures and markets that link service providers to customers. For a thoughtful discussion, see Michael I. Jordan’s interview on Lex Fridman’s Artificial Intelligence podcast — especially the segments at 14:49 and 23:53:
https://www.youtube.com/watch?v=EYIKy_FM9x0&feature=emb_title
First, what do we mean by the “rise of AI”?
- Do we mean the current state of things or what might happen later (sci-fi scenarios)?
- Does “AI” mean simply “smart” or something that functions in a specific technical way?
I expect AI will remain a combination of programming and hardware and will expand its functional roles relative to people. Broadly, it will:
- Fill functional gaps (be useful).
- Expand into functional areas and eventually compete with people for jobs (causing social friction).
I don’t claim to be fully up to date on all AI developments, so I offer a kind of wish list of plausible near-future capabilities and consequences.
Lab
- Sophisticated assistance with experiment workflows and publishing (niches for publishable undergraduate research).
- Finding new relationships in complex databases.
- Handling repetitive tasks that computers can run without human intervention, and deciding when to initiate those tasks.
- Improved literature and data searches; identifying previously unknown relationships between datasets (making discoveries).
- Potentially replacing some graduate-student-level work, which could change PhD training and the academic job market.
Classroom
- AI with deep subject knowledge able to discuss topics across scales (for example, molecular biology or thermodynamics).
- Increasingly capable teaching assistants that become more autonomous.
- To replace instructors, systems would need to respond fluidly and quickly to student questions; form factor (humanlike robot or not) may be less important than interaction quality.
Industry
- More production-oriented than lab systems: increased efficiency and flexibility, more user-specific manufacturing, just-in-time production at scale.
- Job replacement could change industry-government relations and the political calculus around job-creation incentives.
Everyday Society
- As AI replaces some jobs, displaced workers will face tighter competition for remaining roles unless attrition is gradual.
- New jobs will be created by the technology, perhaps better paid but not necessarily as plentiful.
- Decisions about replacing humans with AI will often come down to cost-effectiveness: “Does it make sense to build a super-competent AI instead of hiring a human?”
Political responses might aim to limit which jobs AI can replace, slow the pace of replacement so it tracks job attrition, or ensure public benefits (taxes, retraining) for affected populations. AI is flexible and will wait for opportunities, evolutionarily speaking. Mwa-ha-ha!
“I, for one, welcome (with interest) our developing relationship with our rapidly developing digital symbionts. As with the mitochondria–cell relationship, who will be which part of whom?”
To me, one of the most notable societal effects of AI will be self-driving vehicles — technology that is already far along and will have a huge social impact. AI will also start to assist and take over other routine jobs. It’s quite probable that teaching will be increasingly automated, with AI guiding student learning.
If taken to its full extent, AI teaching could move teachers into more administrative or guidance roles, perhaps overlapping with social work. Teachers will still be required for some time even if AI teaching is pursued aggressively: they will need to supervise AI systems and correct their failures. Roles will also remain for designing and studying such teaching systems, including research on how students learn. This is different from traditional hands-on instruction, and it will only happen if resources and effort are dedicated to it.
Political questions about “what should be taught” and moral questions about teaching behavior will remain. I am unsure whether AIs can or should teach moral behavior; parents will still have an important role, though many parents are working and may struggle to take that on fully.
There will be increasing strain on social structures that assume “everyone must have a job.” Conservatives may resist changing that philosophy even as available job pools shrink. Artistic, creative, and service jobs may mitigate job loss to some extent. It would be logical for people freed by automation to take better care of children — especially in areas that cannot or should not be delegated to AI — but I do not see widespread movement in that direction currently.
I’m mostly retired after one career teaching mathematics and programming at a two-year college and another in the software industry. I currently teach two classes a year in computer architecture, focused on what goes on inside a CPU and how machine instructions cause calculations — very low-level material that’s far from AI.
The only connection I see is how AI might be used to better predict branches or jumps in code. Modern CPUs use multi-stage pipelines similar to an automobile assembly line. If a branch is mispredicted, the pipeline must discard subsequent instructions and be refilled, which slows execution. CPU designers have invested heavily in branch-prediction techniques; better prediction (potentially aided by AI) improves pipeline efficiency.
I have a BS in Information Sciences from UW-Milwaukee. I’ve helped manage Physics Forums for over 22 years. I enjoy learning and discussing new scientific developments, and STEM communication and policy are big interests as well. I’m currently a Sr. SEO Specialist at Shopify and a writer at importsem.com.