We asked our PF Advisors, “How do you see the rise in AI affecting STEM in the lab, classroom, industry, and/or everyday society?” We got so many great responses that we need to split them into parts. This is part 2; read part 1 here. Enjoy!
That’s an interesting question. To answer it, it helps to first understand what AI is.
PNNL uses AI in large and small applications. Data analytics, or ‘Big Data,’ is one area. AI is useful for analyzing big data sets, but the results are only as good as the data and the rules-based engine.
AI is useful for analyzing networks or systems, and is even more useful if it has foresight, i.e., is predictive, anticipatory, and/or insightful. If a prediction or insight is wrong, a system may become unstable, and damage or failure may ensue. In some cases the damage or failure may be benign, i.e., the consequences are not significant, but if damage or failure results in the injury or death of a person or persons, or animals, then that is obviously catastrophic and irreparable.
Microsoft, Google, Facebook, and Amazon all use forms of AI. They claim it enhances the experience, but IMO it is more a form of manipulation.
In science and engineering, I see AI as being useful for dealing with complex problems with many variables. One relevant case would be finding the optimal composition for a complex alloy, such as stainless steel (Fe-based, or specifically Fe-Cr-Ni-Mo-Mn-(C,N)-based). The base element is Fe, but a certain level of Cr is needed for corrosion resistance, a certain level of Mo is needed for high-temperature creep resistance and resistance to hydrogen embrittlement, and certain levels of Mn and Ni are needed for austenite stability and toughness; all of these affect strength in conjunction with the levels of C and N and the subsequent thermo-mechanical treatment. There are minor elements, e.g., Si, Nb, V, Ti, and Zr, which are important with respect to binding with O, but also with C and N, where they act as dispersed strengtheners. Also, various impurities, notably S and P, then As, Al, B, Cu, Co, Sn, Sb, . . . , must be kept to low levels to ensure corrosion resistance and mechanical integrity in adverse environments.
The elements can be combined and analyzed using computational chemistry software, e.g., CALPHAD, or other software, in order to determine various thermophysical and thermomechanical properties of an alloy. There is ancillary or complementary software for determining behaviors like corrosion or creep as a function of environment (including stress, temperature, environmental chemistry, . . . ). Such problems get very large, very quickly. https://en.wikipedia.org/wiki/Computational_chemistry
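To give a feel for why such problems get large so quickly, here is a minimal brute-force sketch of screening a drastically simplified Fe-Cr-Ni-Mo composition space. The composition levels and the `score` function are entirely hypothetical illustrations, not a real CALPHAD or thermodynamic model:

```python
from itertools import product

# Hypothetical screening of a simplified Fe-Cr-Ni-Mo alloy space.
# Levels and scoring are illustrative only; real work would use
# CALPHAD-type thermodynamic/kinetic models for each property.
cr_levels = range(11, 27)   # wt% Cr (stainless needs >= ~10.5 wt% Cr)
ni_levels = range(0, 21)    # wt% Ni
mo_levels = range(0, 7)     # wt% Mo

def score(cr, ni, mo):
    # Toy objective: reward corrosion resistance (Cr, Mo) and
    # austenite stability (Ni); penalize raw-material cost.
    return 2.0 * cr + 1.5 * ni + 3.0 * mo - 0.1 * (cr + 4 * ni + 8 * mo)

candidates = [(cr, ni, mo)
              for cr, ni, mo in product(cr_levels, ni_levels, mo_levels)
              if cr + ni + mo <= 40]   # the balance is Fe
best = max(candidates, key=lambda c: score(*c))
print(len(candidates), "compositions screened; best (Cr, Ni, Mo):", best)

# With ~10 alloying elements at ~20 composition levels each, an
# exhaustive grid would need 20**10 (about 1e13) evaluations,
# which is where smarter (AI-guided) search becomes attractive.
print(f"{20**10:.1e} grid points for 10 elements x 20 levels")
```

Even this three-element toy screens thousands of candidates; each added element multiplies the grid, which is why guided search beats exhaustive enumeration.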
It gets even more complicated if one then takes an alloy and simulates its response in a radiation field, as in a nuclear reactor environment. A neutron flux results in the displacement of atoms in the metal lattice while also leading to activation and transmutation of the isotopes and elements, and the gamma radiation induces electron displacements that influence the chemistry at the atomic level.
So AI, if used correctly, can be beneficial. But it could also be misused.
The examples of Microsoft, Google, Facebook, and Amazon using AI involve monitoring the websites visited, the content browsed, or the online purchases made in order to direct advertisements or news/information intended to influence, which, after all, is the goal of advertising. One could simply be exercising one’s curiosity about something, but the AI does not ‘understand’ one’s motivation. Nevertheless, one will find advertisements related to one’s search or query.
Misuse or abuse could be in the form of misinformation. For example, AI which pushes a health treatment, in the absence of critical information, on someone who might have contraindications is a misuse, or even negligence/abuse, of AI, IMO.
AI requires truthful, factual (i.e., accurate) information in order to function properly.
Remember, AI is a tool that can be used for positive/productive purposes as well as nefarious ones, depending on the user and his/her/their motivations.
I have a different view of AI than many people, because I find it to be poorly defined and often science fiction-y, which makes it potentially less valuable or profound than other people think it is… but I think being mundane is what makes it profound. Maybe that’s because I’m a Star Trek (TNG) fan, and the character Data has influenced my view. Data is a human-like android that in most ways far exceeds human capabilities (strength and intelligence), yet nobody would ever mistake him for a human because he can’t understand basic human emotions and irrational thought — he’s too logical. He can run a starship, but can’t successfully deliver a knock-knock joke!? So if AI is a computer program or robot that can pass for human, I say “why bother?” Or further: “why would we want such a limited and flawed machine?”
The “AI effect” is the idea that anything we haven’t been able to do yet gets labeled “AI.” It has come about because of the problem with Data: things that were once thought to be impossible problems for computers have been solved, but the result isn’t a computer that can pass for human; it’s just a really outstanding tool. Handwriting and speech recognition/synthesis, for example, would seem to be important for AI, but they are now often excluded because computers can do them. These are distinctly human functions that give us a personal connection to the computer but aren’t actually that important to what a computer is or does. For example:
Me: “Hey Siri, what is the square root of 7?”
Siri: “The square root of 7 is approximately 2.64575”.
What’s more important here, the fact that Siri’s voice wasn’t quite believably human or the fact that she spat that answer out in about a second (and displayed the next 50 digits on the screen at the same time!)?
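The arithmetic side of that exchange really is trivial for a machine. For instance, Python’s standard `decimal` module produces the square root of 7 to 50 significant digits essentially instantly:

```python
from decimal import Decimal, getcontext

# Compute sqrt(7) to 50 significant digits. The "hard" part of the
# Siri exchange takes microseconds; sounding human is the hard part.
getcontext().prec = 50
root = Decimal(7).sqrt()
print(root)  # 2.6457513110645905...
```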
So, how do I see the rise of AI affecting us? By being ubiquitous and largely invisible, the way computers are now, but in increasingly diverse and surprising ways. It’s not about pretending to be human and not quite succeeding; it’s about being everywhere we could possibly want it to be, and in more places we didn’t think of (but some engineer, somewhere, did). If you ever notice a computer in a place you didn’t expect, performing a function you never expected a computer could do, that’s AI to me.
- It’s a thermostat that learns your preferences and analyzes fuel costs to decide what fuel to use and duty cycle to run to maximize energy or cost effectiveness while maintaining comfort.
- It’s a car that learns you like early shifting and torque rather than revving up the RPM, and adjusts its shift changes accordingly.
- It’s a TV/DVR that records a show you don’t even know you want to watch yet.
- It’s a refrigerator that orders the butter and parsley you forgot when you picked up scallops yesterday, because it knows you always sauté them and you’re out.
- It’s a cloud that guesses that you have coronavirus because you haven’t left your bedroom in 3 days and someone who was at the grocery store at the same time as you last week has an uncle he played poker with who is infected.
- It’s a social media platform that guesses you want to be outdoorsy because you’re more likely to “like” a post showing camping and hiking than movies and bowling, but it knows you aren’t, because you actually go to the movies and bowling far more often… so it shows you advertisements for movies, not tents.
Yeah, those last few show “AI” can be intrusive if you are of a mindset where you find surprisingly intimate applications of its knowledge and “thinking” unsettling. But the upside is much bigger, and the “internet of things” and “smart” …everything… is changing our lives for the better in so many ways we’ve barely even thought of yet.
I think the recent media hype around AI is a bit overblown. We are a very long way from achieving systems that can handle conceptual understanding the way the human brain does. This is not to say we haven’t made great strides in making, e.g., neural net models more practical and useful, specifically for pattern recognition and category selection.
I can’t say how this research will affect STEM et al., but I can see how it might. In education there is great potential for improvement in automated learning. However, the current trend has been to bend the learner to fit the computerized instruction rather than the reverse. I think this is one area of application where AI research can push to define the new questions that must be answered to move forward: something like how to automate the teacher’s role in recognizing why a student made a particular mistake, and how to change the exposition to more efficiently remediate the student’s erroneous conceptualization.
Neural networks, even the newer recurrent ones, are still deterministic machines. Once trained, their outputs can be coded in a direct algorithm. So in one sense they are just another form of coding, one that uses a brute-force method of training. That training can be automated, so it is efficient in that sense, but the resulting “code” is opaque to the programmer, and unexpected (principally negative) consequences will arise as we invest too much trust in these hidden algorithms.
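The point that a trained network is just a fixed, deterministic function can be made concrete. Here is a toy 2-2-1 ReLU network computing XOR, with hand-derived weights standing in for values a trainer would have found; once frozen, the “network” is nothing but plain arithmetic:

```python
# A "trained" 2-2-1 ReLU network for XOR, with its weights frozen.
# After training, inference is just this deterministic code:
# no learning happens when the network is used.

def relu(x):
    return max(0.0, x)

def xor_net(x1, x2):
    # Hidden layer: a well-known hand-derived XOR solution, used
    # here as a stand-in for weights found by gradient descent.
    h1 = relu(1.0 * x1 + 1.0 * x2)
    h2 = relu(1.0 * x1 + 1.0 * x2 - 1.0)
    # Output layer: linear combination of the hidden units.
    return 1.0 * h1 - 2.0 * h2

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```

Running the same inputs through it will always give the same outputs; the opacity the author describes is just that these particular constants carry no human-readable meaning.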
I think we are still a few paradigm shifts away from true AI in the sense it is being portrayed today. You see this if you listen carefully to Siri and Alexa and the other voice recognition systems and realize that they never really learn anything in terms of individualized actions. They encode aggregate knowledge based on all responses, without the ability to be truly interactive at the level of encoded meaning. This is why they run on centralized servers, and this is why they cannot adapt to individual users other than by selecting from a set of hard-wired, finite customization options.
So my prediction is a future of mild disappointment in the “promise of AI” over the next decades until some epiphany leads to another transition in paradigms. Of course such events are wild cards. Their accurate prediction is tantamount to their actualization.
I have a BS in Information Sciences from UW-Milwaukee. I’ve helped manage Physics Forums for over 18 years. I enjoy learning and discussing new science developments. STEM communication and policy are big interests as well.