Insights: How AI Will Revolutionize STEM and Society

AI Thread Summary
The discussion centers on the impact of AI on STEM fields, with participants expressing a range of views on its implications in labs, classrooms, and industries. Many participants highlight the speculative nature of AI's future, questioning its true capabilities and the validity of labeling certain technologies as AI. Concerns are raised about the potential misuse of AI and the risks associated with allowing machines to make decisions without human oversight. The dialogue also touches on the blurred lines between AI, machine learning, and traditional algorithms, with some arguing that the term "artificial intelligence" is often misapplied. Participants emphasize the importance of defining AI clearly to avoid confusion and ensure responsible implementation. The conversation reflects a cautious optimism about AI's potential as a tool while acknowledging the need for careful consideration of its limitations and ethical implications.
  • #51
jack action said:
I understand what effect a hill has on me (and even on others).
What relevance does that have in the context of this thread?
If it works as a system with feedback (like the flyball governor), I don't think of it as "identifying a pattern".
No, the flyball governor is proportional control only. A specific rpm yields a specific throttle position, and that's it. A PID controller learns the responsiveness of the feedback system and adjusts its outputs accordingly. For example, your thermostat will turn on or off before it senses a temperature change because it remembers how long the change took the last few times. And it will adjust as that delay changes; it will turn on sooner and shut off later on a hot day than on a cool one.
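To illustrate the distinction (a minimal sketch with made-up gains, not any particular thermostat's firmware): proportional control maps the current error straight to an output, while PID also reacts to the error's accumulated history (the I term) and its current trend (the D term), which is what lets a controller act before the setpoint is actually crossed.

```python
# Minimal proportional-vs-PID sketch; gains and numbers are invented.

def proportional(error, kp=2.0):
    return kp * error  # a specific error yields a specific output, and that's it

class PID:
    def __init__(self, kp=2.0, ki=0.1, kd=0.5):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0      # accumulated past error (I)
        self.prev_error = 0.0    # for estimating the trend (D)

    def update(self, error, dt=1.0):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID()
for error in [3.0, 2.5, 1.5, 0.5]:  # room temperature approaching the setpoint
    # PID output reflects history and trend, not just the instantaneous error
    print(proportional(error), pid.update(error))
```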
Being intelligent means to identify a pattern that no one pointed out to you first.
It appears to me the thermostat qualifies by that definition.
Another example: if a computer in a car were taught how to drive in America, and you then put it in England and it figured out by itself that it must drive on the other side of the road, that would be intelligent.
By that definition, an awful lot of humans aren't intelligent. Most are told ahead of time that they have to drive on the left side of the road and some still mess it up.
That is why I agree with others when they say "artificial intelligence" is thrown around lightly. Most machines labeled as AI mostly mimic cognitive functions with some complex feedback mechanisms.
How do you know you don't?
This is why @russ_watters refers to "badly written algorithms": the decision process lies with the programmer, who feeds in the causes and effects he knows, and the computer program never gets outside those definitions. That's feedback. AI would find either new causes or new effects, or would be given a new cause and deduce the effect based on its knowledge.
I have a lot of bad habits I can't seem to break. I suppose that means I lack some intelligence, but at the same time I'd be ok with blaming my programmer for not writing them better.
 
  • #52
russ_watters said:
What relevance does that have in the context of this thread?
It is the basis for "making a decision". It implies you understand that an effect has a cause and that you can act differently according to it. The flyball governor has no knowledge of what is happening. It just gets pushed around by other objects, some of which may have the intelligence to do so with the goal of obtaining a certain effect.
russ_watters said:
It appears to me the thermostat qualifies by that definition.
I never said it wasn't. But if you think it qualifies according to my definition, then it is AI for me as well.

I'm no expert on AI, but maybe it could be defined as "advanced feedback". Maybe the line is blurry where feedback ends and AI begins. But in the end, saying that a flyball governor is the same as AI seems exaggerated to me. It sounds like saying a boiler sitting on a fire is the same as a nuclear power plant. The end results might be similar, but the processes are totally different.
russ_watters said:
By that definition, an awful lot of humans aren't intelligent. Most are told ahead of time that they have to drive on the left side of the road and some still mess it up.
I know you understand what I mean, and that wasn't it. Any human, however stupid - without being told - will realize at some point that everybody is doing what he or she used to do, the only difference being that they are on the other side of the road, and that it is easier to switch sides than to fight his or her way through.

Another example is learning an unknown language. You can put any human in an environment where everybody speaks a language other than his or her own, and that human will learn it without anyone teaching it, just by observing and noticing patterns. It's a question of time, but it will happen.
russ_watters said:
How do you know you don't?
There is a joke going around where AI is defined as a series of nested IF...ELSE statements. That is not AI, because the program does exactly what it was initially told to do. But with the computing power that is available today, a program may evaluate so many conditional statements that to us mere humans it looks like intelligence. But it still isn't; it just mimics it.
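The joke in code form (a sketch with invented rules): every branch below was written in advance by a programmer; nothing was learned. Scale this up to millions of branches and it can start to look like intelligence while doing exactly what it was told.

```python
# A "chatbot" that is nothing but hand-written conditionals (invented rules).
def chatbot(message):
    text = message.lower()
    if "hello" in text:
        return "Hi there!"
    elif "weather" in text:
        if "tomorrow" in text:
            return "I hear tomorrow will be sunny."
        return "Looks clear right now."
    elif "?" in text:
        return "Good question. What do you think?"
    return "Tell me more."

print(chatbot("Hello!"))               # every answer traces to a branch above
print(chatbot("Tell me about quarks")) # unanticipated input hits the catch-all;
                                       # nothing new is ever deduced
```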
russ_watters said:
I have a lot of bad habits I can't seem to break. I suppose that means I lack some intelligence, but at the same time I'd be ok with blaming my programmer for not writing them better.
But when you have a machine with AI, it may make decisions completely unforeseen by the programmer. You can see that as owning a dog. If the dog does something unwanted, who is responsible? The breeder, the trainer, the owner or the dog itself? There may not be a single answer that fits all possible cases.
 
  • #53
jack action said:
It is the basis for "making a decision"...

...I know you understand what I mean, and that wasn't it. Any human, however stupid - without being told - will realize at some point that everybody is doing what he or she used to do, the only difference being that they are on the other side of the road, and that it is easier to switch sides than to fight his or her way through.

Another example is learning an unknown language. You can put any human in an environment where everybody speaks a language other than his or her own, and that human will learn it without anyone teaching it, just by observing and noticing patterns. It's a question of time, but it will happen.

There is a joke going around where AI is defined as a series of nested IF...ELSE statements. That is not AI, because the program does exactly what it was initially told to do. But with the computing power that is available today, a program may evaluate so many conditional statements that to us mere humans it looks like intelligence. But it still isn't; it just mimics it.

But when you have a machine with AI, it may make decisions completely unforeseen by the programmer. You can see that as owning a dog. If the dog does something unwanted, who is responsible? The breeder, the trainer, the owner or the dog itself? There may not be a single answer that fits all possible cases.
So again: how do you know that you don't?

I'm not being glib with that question; I really mean it and would like an answer. How do you know that you aren't just acting on an extremely complex set of if/then/else statements based on an extremely complex set of inputs? Even worse, how can you be sure that what differentiates you from a computer isn't that you suck at it?

People, dogs and gnats are unpredictable largely because they suck at being logical, not because they are intelligent. Maybe part of the definition issue is that it isn't "intelligence" people think of when they think of AI, but artificial life. They want something that feels real, even if that actually makes the system inferior in many ways. If that's what people think of and want, fine, but that's not what I'd be after.
 
  • #54
jack action said:
The flyball governor has no knowledge of what is happening.
Neither does a neural network.

If the issue is what machines can do and should do, what is the utility of a narrow definition of AI?

Pattern recognition is a method. Our policies should be method independent. If not, they can be rendered obsolete overnight.
 
  • #55
jack action said:
Another example is learning an unknown language. You can put any human in an environment where everybody speaks a language other than his or her own, and that human will learn it without anyone teaching it, just by observing and noticing patterns. It's a question of time, but it will happen.
FYI, regarding language:
http://news.mit.edu/2018/machines-learn-language-human-interaction-1031

I don't think that example is of learning a language from scratch, but rather learning the use of language in everyday life. We're always going to build language and intelligence programs into computers simply because it's easier that way. It takes decades to teach a human to be functional -- why would we want that in a computer when you can copy and paste? But in this example, the computer is learning to be more conversational in the way it speaks. It starts as mimicking (like a parrot or a human would), but here's a telling part in the article:
A robot equipped with the parser, for instance, could constantly observe its environment to reinforce its understanding of spoken commands, including when the spoken sentences aren’t fully grammatical or clear. “People talk to each other in partial sentences, run-on thoughts, and jumbled language. You want a robot in your home that will adapt to their particular way of speaking … and still figure out what they mean,”
The robot starts out speaking properly, but learns by mimicking humans to speak poorly. So again, the defining characteristic that it lacks is imprecision. It needs to learn how to be bad at speaking to be more human!

Is this really what we want out of AI?
 
  • #56
russ_watters said:
How do you know that you aren't just acting on an extremely complex set of if/then/else statements based on an extremely complex set of inputs?
Like I said, maybe one could define intelligence as "advanced feedback", and there is no clear line where one ends and the other begins. How do you know that you are a human and not an ape or a tree? Apparently, there are enough differences that we created different words for them.

That is a problem that arises from the fact that humans love to classify things, when such classifications don't really exist in nature.
russ_watters said:
Even worse, how can you be sure that what differentiates you from a computer isn't that you suck at it?

People, dogs and gnats are unpredictable largely because they suck at being logical, not because they are intelligent.
I don't agree with the use of "suck at it". My definition of intelligence is "the ability to identify patterns". I prefer saying some can be better at it than others. Humans are most likely the best at it. But if I compare a cheetah with a human, I say that its ability to run is better, not that the human sucks at running. The latter implies that the human is not a worthy living being because he can't run as well as the best of living beings. I don't want to imagine what one would think of a fish with that kind of thinking ...
russ_watters said:
even if that actually makes the system inferior in many ways.
russ_watters said:
The robot starts out speaking properly, but learns by mimicking humans to speak poorly. So again, the defining characteristic that it lacks is imprecision. It needs to learn how to be bad at speaking to be more human!
Here you have a lot of judgements about AI which don't affect its definition. How do you define "inferior"? What is "speaking properly" and "speaking poorly"? I don't think you'll find scientific laws to define this.
russ_watters said:
Is this really what we want out of AI?
That is a good reflection for the present segment of "Ask the advisors". I can't wait to read people's opinions on this.
russ_watters said:
It takes decades to teach a human to be functional -- why would we want that in a computer when you can copy and paste?
If you look at AI as a machine that can do what I can do - it just does it instead of me - that view is logical. But AI can be much more, and that's where I find it interesting. If we assume that a cure for cancer exists, humans most likely can find it by looking for patterns that link causes and effects. It may take decades, maybe centuries. But with AI, it will most likely go faster. AI is the power tool for finding patterns. Sure, you can use AI to do simplistic tasks - like you can use an electric drill to make a hole in a piece of paper - but I find this to be simply a waste of resources for amusement, or just to say "I do it because I can."
anorlunda said:
If the issue is what machines can do and should do, what is the utility of a narrow definition of AI?

Pattern recognition is a method. Our policies should be method independent. If not, they can be obsoleted overnight.
By using the word "policies", I think you are worrying about what type of decisions we should leave to machines. If so, I do share some concerns about this, and I did share them in my answer to @Greg Bernhardt 's question. Short answer: I fear that people will blindly trust a machine's decision over their own. That applies to AI or flyball governors, but the more a machine "looks" smart, the easier it is to go down that path.

But I feel @Greg Bernhardt 's question was broader than this subject alone.
 
  • #57
I have suppressed my urge to post anything about AI since I believe both intelligence and artificial intelligence have rather obscure definitions.
-
I will say what no one else has said in this thread so far (I think): human intelligence - and some animals display this as well - has an ability to say "What if..." A very eccentric engineer once told me that humans, compared to machines, can solve problems with significant missing data. In other words, we say "What if..."
 
  • #58
Averagesupernova said:
I have suppressed my urge to post anything about AI since I believe both intelligence and artificial intelligence have rather obscure definitions.
-
I will say what no one else has said in this thread so far (I think): human intelligence - and some animals display this as well - has an ability to say "What if..." A very eccentric engineer once told me that humans, compared to machines, can solve problems with significant missing data. In other words, we say "What if..."
So do you believe that AI is forever doomed to not be able to do extrapolation? I don't.
 
  • #59
phinds said:
So do you believe that AI is forever doomed to not be able to do extrapolation? I don't.
No, I can't say for sure that I believe that. There are systems out there now that are programmed to ignore inputs that are outside normal limits. But they are basic, simple systems, such as an engine control module that recognizes a bogus reading from an oxygen sensor and simply meters fuel based on a preloaded table. It notifies the driver that there is a problem, and it ends there. It does not troubleshoot to find a defective sensor, broken wire, bad connection, etc. If it did, it would have a limited number of scenarios to troubleshoot for. A human would notice wiring that does not look original (for instance) and suspect someone at one time made an improper repair, or something of this nature. It takes humans the better part of a lifetime of learning by experience. Doing a job by looking something up in a troubleshooting guide is no different from a set of if-then instructions in a computer. It will never come close to human experience, which in my opinion is required for good extrapolation.
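For illustration, a sketch of the limp-home behavior just described, with made-up numbers: the module only checks that the reading is physically plausible; if it isn't, it falls back to the preloaded table and flags the driver. It never asks why the sensor failed - that step is left to the human.

```python
# Sketch of O2-sensor fallback logic; all values are invented.
O2_VALID_RANGE = (0.1, 0.9)                      # volts: plausibility window
FUEL_TABLE = {1000: 2.1, 2000: 4.0, 3000: 6.2}   # rpm -> injector pulse (ms)

def set_check_engine_light():
    print("CHECK ENGINE")                        # driver is notified; diagnosis ends here

def closed_loop_fueling(rpm, o2_volts):
    base = rpm * 0.002                           # toy baseline pulse width
    return base * (1.0 + (0.45 - o2_volts))      # trim toward a stoichiometric target

def fuel_pulse(rpm, o2_volts):
    lo, hi = O2_VALID_RANGE
    if not (lo <= o2_volts <= hi):               # bogus reading detected
        set_check_engine_light()
        nearest = min(FUEL_TABLE, key=lambda r: abs(r - rpm))
        return FUEL_TABLE[nearest]               # meter fuel from the preloaded table
    return closed_loop_fueling(rpm, o2_volts)    # normal feedback operation

print(fuel_pulse(2000, 0.45))   # healthy sensor: closed loop
print(fuel_pulse(2000, 0.02))   # shorted sensor: table value, light on
```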
 
  • #60
Averagesupernova said:
A human would
Why compare it with humans at all? Is it because of your understanding of the word intelligence?
 
  • #61
anorlunda said:
Why compare it with humans at all? Is it because of your understanding of the word intelligence?
You have a valid point. And the (in my opinion) obscure definitions cause some trouble when discussing it. Right now, as far as I know, human intelligence is the highest known, or at least the highest recognized. What would you compare it to?
-
A number of years ago I troubleshot test equipment in a factory setting. Part of my job was to create flow charts to indicate what part of a circuit board the calibrator was to inspect based on a particular failure. Wrong part here? Wrong part there? Etc. What this amounts to is taking my intelligence and experience and condensing it down to a series of if-thens. The longer we manufactured a certain product the more detailed the flow charts became. After a certain time of course, there was a net loss in productivity by further detailing the flow charts. There was never a time when a troubleshooter such as myself was not required.
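A sketch of that condensation, with invented symptoms: the flow chart is just a lookup structure, and anything outside it still falls back to the human troubleshooter.

```python
# Expertise condensed into a series of if-thens; symptoms are invented.
FLOW_CHART = {
    "no output on channel 1": "inspect U7 and surrounding solder joints",
    "gain out of tolerance":  "check R12/R13 divider values",
    "fails self-test 04":     "reseat ribbon cable at J3",
}

def dispatch(symptom):
    # The chart grows with experience, but only covers what was written down.
    return FLOW_CHART.get(symptom, "escalate to an experienced troubleshooter")

print(dispatch("gain out of tolerance"))
print(dispatch("smells like burnt resistor"))  # not in the chart
```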
 
  • #62
Averagesupernova said:
You have a valid point. And the (in my opinion) obscure definitions cause some trouble when discussing it. Right now, as far as I know, human intelligence is the highest known, or at least the highest recognized. What would you compare it to?
Even human intelligence can vary in unexpected ways. A friend of mine named Dave was a factory supervisor somewhere in Africa many years ago and he was assigned to get a wooden fence built along the back of the property.

He told the foreman of the workers that he wanted the fence to be absolutely straight and just to be sure he was clear, he showed the foreman a picture of a perfectly straight fence. The foreman assured Dave that the fence would be absolutely straight and then proceeded to build the fence in what Westerners would consider a crooked, slightly rambling line that probably never deviated from the center line by more than a foot or so but which to Dave was clearly NOT straight.

The foreman was flabbergasted that Dave did not see the fence as straight. Again Dave showed the foreman the picture and pointed out that the fence wasn't the same as the picture. The foreman insisted that the two fences were identically straight.

When Dave complained to the factory owner that the foreman was either obstructionist or mentally retarded, the owner just laughed and explained to Dave that for an African of that tribe, that WAS absolutely straight and you would never convince him otherwise.

You can argue that it was a difference in definitions, but I argue that it was a different way of THINKING about what "straight" means.
 
  • #63
phinds said:
You can argue that it was a difference in definitions, but I argue that it was a different way of THINKING about what "straight" means.
As far as I am concerned, doesn't the definition of straight or any word depend on how it is thought about?
-
Interpretation of definitions can certainly get us into trouble. But for someone to argue that those two fences were identical is not playing with a full deck, if they understand the definition of the word identical. I would have expected the foreman to say that he thought it was straight enough. The foreman did display a certain amount of intelligence, based on the fact that (I'm assuming here) the fence was built in such a way as to do the job that he/she assumed it was meant to do.
 
  • #64
Averagesupernova said:
What would you compare it to?
How about just looking at the utility of the machine without needless comparisons?
 
  • #65
anorlunda said:
How about just looking at the utility of the machine without needless comparisons?
So what are you saying? That it's pointless to compare a lowly thermostat to the intelligence of a normal human being? That intelligent enough is good enough? That doesn't make for stimulating conversation, nor does it give us any idea of how to make something smarter if we never compare one system with another.
 
  • #66
fresh_42 said:
And the latest fatal mismatch was caused by Boeing's software change to avoid a new FAA approval procedure for its 737 Max. As long as people program by the quick-and-dirty method, I will oppose automatic decisions.
That was human error, or malfeasance. MCAS could have been successful IF there had been a way to detect inconsistencies among inputs from various instruments, such that a fault (input error) could be detected and corrected. In the case of the aircraft's control system, there should have been a way for the system to know if an angle of attack (AOA) sensor was faulty (damaged). Left and right sensors should have been compared, and any difference/inconsistency challenged. The pilots needed training in overriding (disconnecting) the system. Boeing engineering staff and management, and the FAA (oversight), are ultimately responsible.
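For illustration only - this is not Boeing's actual logic, just a sketch of the cross-check described above: act only when redundant sensors agree, and on disagreement flag the fault and withhold automatic commands instead of trusting a single input.

```python
# Redundant-sensor cross-check sketch; thresholds and gains are invented.
MAX_DISAGREEMENT_DEG = 5.5       # hypothetical agreement threshold

def alert_crew(message):
    print(message)               # annunciate; authority goes back to the pilots

def aoa_input(left_deg, right_deg):
    if abs(left_deg - right_deg) > MAX_DISAGREEMENT_DEG:
        return None                      # inconsistent: declare the input faulty
    return (left_deg + right_deg) / 2.0  # consistent: use the average

def trim_command(left_deg, right_deg, stall_aoa=14.0):
    aoa = aoa_input(left_deg, right_deg)
    if aoa is None:
        alert_crew("AOA DISAGREE")
        return 0.0                             # no automatic trim on bad data
    return max(0.0, 0.1 * (aoa - stall_aoa))   # toy nose-down authority

print(trim_command(15.0, 14.6))   # sensors agree: small corrective command
print(trim_command(74.5, 14.6))   # one sensor failed: alert, no command
```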

Error detection and correction is one area of AI needing attention. Currently, humans do the programming and write the rules. Full intelligence requires learning from successes AND failures (errors).

In a wholly different area: A former colleague did a thesis on "Artificial Neural Network Modeling of Mechanical Properties of Seamless ___________ Tubing". I've seen similar works for various alloys. It requires knowledge of the alloy system, the sensitivity of mechanical properties (strain hardening and strain rate hardening) to composition, and the manufacturing process. Seamless tubing involves multiple mechanical reduction steps with intermittent thermal treatment (annealing), but annealing can be for recrystallization or solution annealing, the latter being at a greater temperature than simple annealing. The solution annealing temperature must be set in conjunction with time in order to prevent grain growth (this relationship will depend on the composition and the level of cold work (dislocation density) in the alloy structure).
A related paper - https://pureadmin.qub.ac.uk/ws/files/377750/132.pdf
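As a rough sketch of what such a property model looks like in practice - the features (composition, cold work, anneal temperature and time) and the target relation below are invented for illustration, not taken from the thesis or paper cited above, with scikit-learn standing in for whatever tools were actually used:

```python
# Toy ANN model of "yield strength" vs. processing variables; all data synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(0.02, 0.10, n),   # alloying-element fraction (toy)
    rng.uniform(0.0, 0.6, n),     # cold work fraction
    rng.uniform(900, 1150, n),    # anneal temperature, C
    rng.uniform(1, 60, n),        # anneal time, min
])
# Invented relation: hardens with composition and cold work, softens with annealing.
y = (300 + 2000 * X[:, 0] + 400 * X[:, 1]
     - 0.15 * (X[:, 2] - 900) * np.log1p(X[:, 3])
     + rng.normal(0, 10, n))

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 16),
                                   max_iter=2000, random_state=0))
model.fit(X, y)
print(model.predict([[0.05, 0.3, 1000, 20]]))  # predicted strength (toy units)
```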

With respect to what is considered AI, did you know that there are at least 40 types of hammers?
https://www.garagetooladvisor.com/hand-tools/different-types-of-hammers-and-their-uses/
 
  • #67
Astronuc said:
With respect to what is considered AI, did you know that there are at least 40 types of hammers?
Yes.
 
  • #68
Averagesupernova said:
So what are you saying?
I say that we should judge machines by their utility and their risks. The methods (dumb/smart) that they use to achieve that are irrelevant.

A self-driving car has benefits and risks, regardless of whether it uses AI or not.

I think machines of the future will be extremely capable. I also think that the experts will never stop debating whether those machines really have AI.
 
  • #69
Game changer in the chess world! The extract below is from https://en.wikipedia.org/wiki/AlphaZero

DeepMind stated in its preprint, "The game of chess represented the pinnacle of AI research over several decades. State-of-the-art programs are based on powerful engines that search many millions of positions, leveraging handcrafted domain expertise and sophisticated domain adaptations. AlphaZero is a generic reinforcement learning algorithm – originally devised for the game of go – that achieved superior results within a few hours, searching a thousand times fewer positions, given no domain knowledge except the rules."[1] DeepMind's Demis Hassabis, a chess player himself, called AlphaZero's play style "alien": It sometimes wins by offering counterintuitive sacrifices, like offering up a queen and bishop to exploit a positional advantage. "It's like chess from another dimension."[9]
 
  • #70
We asked our PF Advisors “How do you see the rise in A.I. affecting STEM in the lab, classroom, industry, and/or in everyday society?”. We got so many great responses that we need to split them into parts; here are the first several. Enjoy!

Ranger Mike
If things run true as before - and I have seen no vast improvement in the correct forecasting of future trends in these areas - I see lots of money going into these areas but not much usable product coming out. I choose not to dwell on such predictions, like back in the early 1980s when we were told the factory of the future would be a lights-out manufacturing operation with only a few humans doing maintenance to keep the...

Continue reading...
 

  • #72
Pretty interesting read. I look forward to the next installment.
 
  • #73
fresh_42 said:
Yes.
Although I quoted your post, the last statement was not directed at you personally, but at the reader. I perhaps should have made that clearer.
 
  • #75
I am surprised that the majority of advisors seem to be rather sceptical in their opinions. Personally, I tend to agree with @bhobba; he mentioned some very interesting examples.
 
  • #76
I think that by now AI should be part of rudimentary education (not the technical details, but the gist and scope). Many people don't seem to realize the role it has in modern societies and economies. News, politics, the stock market, social media, advertising, the internet: by now it's all driven mostly by AI rather than people. It's the reason why companies are trying to collect as much information about you as possible. You could say that it offers the power to have a customized Sith Lord for every citizen, so to speak. That is, manipulation is customized, as AI can exploit each person's situation and psychological weaknesses to optimize the effect it is able to have on your behavior. That's the big-business aspect. And this is the thing everybody should understand so that we can have a conversation about the ethics and impacts. It should be combined with education in critical thinking and ethics. Everyone should know about what/who is trying to manipulate them and how.

Besides the role of AI in manipulating human behavior, advancements in autonomous robotics are set to further transform a number of areas: mining, war, espionage, space, manufacturing, farming, etc. The specifics about exactly how and when might be slightly uncertain, but overall and in general it's pretty clear and simple. The main factors that would change things are human intervention to regulate how AI is used, and competition between groups of people for control and domination. Besides that, if you look at the incentives and what is possible, you can get a good idea of what the future is likely to look like.

Personally, I think space is the big one. Modern AI is just about at the level where many of the key breakthroughs envisioned from the beginning by people such as von Neumann are feasible. This includes interstellar space missions, massive industries in outer space, terraforming, etc. How far out these things are is not clear. They currently still require extensive human input in terms of design and engineering. But at some level of achievement these things could be ramped up enormously in scale. We could, for example, launch a single automated mission to send probes to millions of stars, and then millions of probes from each of those million stars, and so on.
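To put toy numbers on that cascade (the fan-out figure is invented): if each probe that arrives can eventually launch the same number of new probes, coverage grows geometrically per generation, and the probe count stops being the limit almost immediately.

```python
# Geometric growth of a self-replicating probe cascade; numbers are invented.
fanout = 1_000_000   # probes launched per settled system (hypothetical)
reached = 1          # the single initial mission
for generation in range(1, 4):
    reached *= fanout
    print(f"generation {generation}: ~{reached:.0e} systems")
# By generation 2 the count already exceeds the ~1e11 stars in the Milky Way;
# travel and replication time, not probe count, becomes the limiting factor.
```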

If you go further out, you can expect a time when science, mathematics, and engineering are also dominated by AI. In that case, it is relevant to wonder what role the human being has. The AI will develop insights, construct proofs, record observations, do analysis, pose new questions, maintain an awareness of the state of the art, etc. It will share this information in a distributed way in some non-human-readable form. People would, by default, have little clue what is going on, but would notice improvements in technologies. We would likely act as managers giving approval on high-level projects, while being tempted to micromanage things we don't understand. Efforts will be made to figure out how to improve communication between AI and people, so that we can understand as much as possible of what they are learning and doing, and participate as much as possible in decision making. Many proofs, analytic functions, and rationales will be too large and complex to fit in human memory and be understood.
 
  • #77
Jarvis323 said:
That is, manipulation is customized, as AI can exploit each person's situation and psychological weaknesses to optimize the effect it is able to have on your behavior. That's the big-business aspect. And this is the thing everybody should understand so that we can have a conversation about the ethics and impacts. It should be combined with education in critical thinking and ethics. Everyone should know about what/who is trying to manipulate them

Humans have been manipulating humans since time immemorial. Advertising of any type has an element of manipulation, from downright lies to psychology ("The Hidden Persuaders" by Vance Packard, 1957). Now, as you note, it can target individuals. As always, caveat emptor. It isn't AI that is manipulating us; it is humans.

Jarvis323 said:
The main factors that would change things are human intervention to regulate how AI is used, and competition between groups of people for control and domination.

We can regulate the overt use of AI but not the surreptitious use. How do you know an AI app is monitoring you? How can we determine what that use is?

What we do with AI is our decision at this time. It will change our behavior as we see benefits from its implementation. That change may expose us to, or create, problems heretofore unknown. Computers made the internet possible, allowing us to expose our entire lives to the world if we choose, making us vulnerable to crime and exploitation. But then, AI might help us. Perhaps we will develop a cyber Gort ("The Day the Earth Stood Still") to monitor the web and protect us from ourselves.
 
