We asked our PF Advisors, “How do you see the rise in A.I. affecting STEM in the lab, classroom, industry, and/or everyday society?” We got so many great responses that we had to split them into parts. This is part 3. Read part 1 here and part 2 here. Enjoy!
That’s an interesting question, not least because the term ‘A.I.’ covers a whole range of technologies. There have already been significant, and most likely permanent, changes in industry and everyday society due to ‘knowledge-based systems’ and ‘big data analysis’.
A lot of what I see in the lab is restricted to something like image analysis: not just clinical applications (radiology charts and disease diagnosis) but also finding patterns in (image, genomic, proteomic, …) data. And while there are reports of using A.I. to help design experiments, aside from one high-profile splashy paper in 2009, I have yet to read about an expert system conceiving of an experiment.
In the classroom, I have experience with A.I.-lite systems in place for a lot of the pre-calc and calculus sequence in math (“Mastery learning”) and I expect that similar approaches could be used in vocational settings. However, I have not seen any evidence that an A.I. system could be used to wholly replace an instructor.
As a summary comment, my view of A.I. may be retrograde, but I draw a parallel between the way A.I. will grow to influence our social/professional lives and the way phones have grown to permeate them. For example, I note that in the past, phones were designed to interface with people: the shape of the handset conformed to our face. Now, people have to conform to the phone: smartphones are flat, quite unlike our faces. One result of this is that we hold smartphones quite differently than ye olde handset. Another: if your fingers are too large for the virtual keypad, sucks for you.
That is to say, humans will adapt to A.I. interfaces as presented to us, as opposed to adapting A.I. to fit our needs. I agree that this is quite a pessimistic and passive view of humanity.
From my vantage point, I see some confusion regarding “A.I.” In the public discourse, when people think of A.I., the first images that come to mind are those established in science fiction, where computers or machines have human-level consciousness. The A.I. that is currently in place certainly does not have that capability.
What we are seeing instead is the rise of machine learning — essentially the development of algorithms to learn and make predictions from data without being explicitly programmed. The rise of machine learning has been made possible due to several factors:
1. Greater computational capabilities.
2. The corresponding rise in the collection of data in virtually all areas of life (the so-called “big data” phenomenon), including many areas of science (genomics, proteomics, high-energy particle physics, neuroscience, remote sensing, and others).
3. Developments in statistics, in particular developments in non-parametric statistics — one can consider neural networks as essentially non-linear statistical models that involve function approximation.
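To make point 3 concrete, here is a minimal sketch of my own (not code from any system mentioned above): a one-hidden-layer neural network, written in plain NumPy and trained by gradient descent, acting as a non-linear statistical model that approximates the function sin(x) from sampled data. All names and parameter choices here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: learn y = sin(x) on [-pi, pi] from 200 noisy-free samples.
X = rng.uniform(-np.pi, np.pi, size=(200, 1))
y = np.sin(X)

# One hidden layer of 16 tanh units: f(x) = tanh(x W1 + b1) W2 + b2
W1 = rng.normal(0.0, 1.0, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.1, (16, 1)); b2 = np.zeros(1)

def mse():
    """Mean-squared error of the current network on the training data."""
    return float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))

initial_loss = mse()
lr = 0.1
for step in range(5000):
    H = np.tanh(X @ W1 + b1)            # hidden activations, shape (200, 16)
    pred = H @ W2 + b2                  # network output, shape (200, 1)
    err = pred - y                      # residuals
    # Backpropagation: gradients of the mean-squared error.
    dpred = 2.0 * err / len(X)
    dW2 = H.T @ dpred;  db2 = dpred.sum(axis=0)
    dH = (dpred @ W2.T) * (1.0 - H ** 2)  # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ dH;     db1 = dH.sum(axis=0)
    # Plain gradient-descent updates.
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

final_loss = mse()
```

The fit improves because each update moves the weights down the error surface; this “learning from data without being explicitly programmed” is exactly the function-approximation view of neural networks described above.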
The rise of the above has already had a major impact on STEM research, in terms of providing insights and generating hypotheses at a speed not previously possible, and I see that continuing to take place.
In terms of industry, in the area I’m involved with (the pharma/biotech/health care sectors), I have already seen machine learning algorithms used to integrate real-world data in the development of clinical trials, and to enhance the ability to find possible matches in the molecular structures of drugs or vaccines that could speed up the drug development phase (the current COVID-19 pandemic has spurred major activity in this area within the company I work for). I see similar developments in many other areas of industry as well.
One area that has not been mentioned is the effort to create new infrastructure for a market linking providers of services to potential customers. I saw a podcast hosted by Lex Fridman (the Artificial Intelligence podcast) in which he interviewed Berkeley CS and Statistics professor Michael I. Jordan about his efforts in this area. Watch the whole broadcast, but focus especially on 14:49 and 23:53.
First, what is meant by “the rise of AI”?
- Current state of things or what might be happening in a while (Sci-Fi like).
- Does AI mean smart or functioning in some particular technical way?
Anyway, it’s still going to be a combination of programming and hardware.
It is going to go through different functional expansions with respect to people:
- filling a functional void in some way (being useful)
- expanding into functional areas, which will eventually compete for jobs with people (causing friction)
I am not really that current on what’s going on in AI, but I can consider what kind of artificial smarts (more so than current?) might be useful in the future.
Since I don’t know that much about what is going on in AI, I guess I’ll be presenting a kind of wish-list of somewhat ideal goals to be achieved in the not-too-distant future, and consequences that might follow.
Increasingly sophisticated assistance in making, producing, publishing research.
Finding new relationships in complex databases of information.
Taking care of the repetitive tasks that a computer can run without human intervention.
Deciding when it is appropriate to do those tasks. (initiate and plan experiments)
Increasingly sophisticated literature and data searches.
Identify hitherto unknown relationships between different datasets. (making a discovery)
Eventually replacing graduate-student-level people, thus reducing the need to train so many excess PhDs.
Not sure of the effect on the PhD market: either PhDs get replaced, or equivalent-intellect AI units cost too much.
Increasingly deep knowledge of subject area. (able to discuss aspects of large scale biology on a molecular scale, or from the view of thermodynamics).
Increasingly involved teaching assistants, gradually becoming more autonomous.
Would require smoother functioning and more reactive interaction (able to quickly take and respond to questions) in order to replace people.
Some may think that a more human-looking and human-reacting robot would be more appealing. I don’t think it matters that much if you get the interactional personality right.
Like lab, but more production oriented.
Should increase efficiency and flexibility of production, making production more user specific.
Just-in-time to the max.
Replacement of jobs changes how industry relates to local government (jobs for business breaks is a common thing in politics and is based on the production of jobs, which generally means happy voters).
Without the jobs, the political equation becomes unbalanced.
The business breaks (usually tax cuts) will be harder to support.
As AI pushes people out of jobs at which they functioned well, people will get pissed off unless it is done by attrition.
Even then, the pool of available jobs of the kind AI is taking will be shrinking, tightening competition for new jobs for each year’s set of new workers.
Perhaps there will be an equal or greater number of new jobs being created by the new technology. I am sure that there will be new jobs, maybe better paying, but probably not as plentiful.
Things become a matter of cost. Does it make more sense to build a super-competent AI machine if it’s a better deal than employing an equivalent biological person? (Sci-Fi plot line here)
Political push-back would want to:
- Limit the jobs in which AI can replace people (rule some jobs out)
- Not have AI replacement happen faster than job attrition
- Get some benefit for the population (taxes? training?)
AI is flexible, it can do any of these things.
It can wait for its opportunity (evolutionarily speaking)!
“I, for one, welcome (with interest) our developing relationship with our rapidly developing new digital symbionts.
As with the mitochondria–cell relationship, who will be which part of whom?”
The most notable effect of AI in everyday society, to my mind, is self-driving vehicles, which are quite far along in development and will have a huge social impact. I believe AI will also start to assist with, and even take over, other routine jobs that have historically been done by humans. It’s quite probable that teaching will be increasingly automated, with AI programs guiding student learning. If this is carried out to the full extent possible, it may put teachers in more of an administrative role, which may be merged with the “guidance counselor” aspects of social work. Teachers will be required for quite some time even if AI teaching is aggressively pursued and funded. They will need to oversee the AIs and to correct problems where the AIs fail. There will also still have to be roles related to the design and development of such teaching systems, including the study of how students learn. This will be a bit different than the traditional teaching role of “hands-on” instruction. It won’t happen automatically, though – it will only happen if the effort is made to make it happen.
The political questions of what should be taught, and the moral questions of how to teach morality and “proper behavior,” will remain. This is an interesting topic, but not really directly related to the AI question, though it’s an important social issue, in my opinion. I’m not sure if AIs can teach moral behavior, and if they can, I’m not sure that they should. Perhaps it is the parents’ role to teach such things, though unfortunately it’s unclear to me how well parents, typically both working, are able to do this.
There is, and will continue to be, in my opinion, an increasing strain on the social structures and expectations that demand that “everyone have a job.” Conservatives will no doubt be unwilling to change this philosophy, even as the available job pools continue to shrink. Artistic, creative, and service jobs may continue to mitigate this to some extent. It would be logical to have some of the people freed up by automation take better care of guiding our children, especially in aspects that cannot or should not be given to AIs. But I don’t necessarily see this as a current social trend, unfortunately.
I’m mostly retired, after one career teaching mathematics and various programming languages in a two-year college, and another in the software industry. At present, I teach two classes a year in computer architecture, much of which is devoted to what goes on in a computer’s CPU, and how machine instructions cause it to do various calculations — very low-level stuff. For the most part, this is about as far away from AI as you can get.
The only connection I can see is how AI might be used to better predict branches, or jumps to other locations in the code. The CPUs in modern computers use multi-stage pipelines to process instructions, similar to the way an automobile assembly line can speed up the process of assembling automobiles. On the car assembly line, if at one stage it’s found that a part doesn’t fit, the assembly line grinds to a halt. If a computer with a pipelined processor transfers control to a different part of the program, it’s possible that subsequent instructions in the pipeline will need to be discarded. CPU designers have put a lot of effort recently into techniques that the CPU can use to predict whether a branch will be taken or not. If a branch is correctly predicted, all operations currently in the pipeline proceed without a hiccup, but if the branch prediction is wrong, the contents of the pipeline have to be discarded, slowing things down until the pipeline is refilled.
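As an illustration of what a branch predictor does (this is my own minimal sketch of the classic two-bit saturating-counter scheme, not any particular CPU’s design, and the addresses and outcome pattern are invented), consider a loop branch that is taken nine times and then falls through. The counter per table entry ranges from 0–3: values 0–1 predict not-taken, 2–3 predict taken.

```python
class TwoBitPredictor:
    """Table of 2-bit saturating counters, indexed by branch address."""
    def __init__(self, table_size=1024):
        self.table = [1] * table_size  # start in the "weakly not-taken" state
        self.size = table_size

    def predict(self, pc):
        # Counter values 2 and 3 predict "taken"; 0 and 1 predict "not taken".
        return self.table[pc % self.size] >= 2

    def update(self, pc, taken):
        # Saturate at 0 and 3 so one anomalous outcome can't flip a
        # strongly-biased prediction.
        i = pc % self.size
        if taken:
            self.table[i] = min(3, self.table[i] + 1)
        else:
            self.table[i] = max(0, self.table[i] - 1)

# Hypothetical loop branch at address 0x40: taken 9 times, then falls
# through, and the whole loop runs three times.
history = [True] * 9 + [False]
pred = TwoBitPredictor()
correct = 0
for taken in history * 3:
    if pred.predict(0x40) == taken:
        correct += 1
    pred.update(0x40, taken)
accuracy = correct / (len(history) * 3)  # 26 of 30 predictions correct
```

After warming up, the predictor only misses the final, not-taken iteration of each loop pass; those residual mispredictions, each of which forces the pipeline flush described above, are exactly what more elaborate history-based and machine-learning-style predictors try to eliminate.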
I have a BS in Information Sciences from UW-Milwaukee. I’ve helped manage Physics Forums for over 18 years. I enjoy learning and discussing new science developments. STEM communication and policy are big interests as well.