Insights: How AI Will Revolutionize STEM and Society

AI Thread Summary
The discussion centers on the impact of AI on STEM fields, with participants expressing a range of views on its implications in labs, classrooms, and industries. Many participants highlight the speculative nature of AI's future, questioning its true capabilities and the validity of labeling certain technologies as AI. Concerns are raised about the potential misuse of AI and the risks associated with allowing machines to make decisions without human oversight. The dialogue also touches on the blurred lines between AI, machine learning, and traditional algorithms, with some arguing that the term "artificial intelligence" is often misapplied. Participants emphasize the importance of defining AI clearly to avoid confusion and ensure responsible implementation. The conversation reflects a cautious optimism about AI's potential as a tool while acknowledging the need for careful consideration of its limitations and ethical implications.
We asked our PF Advisors, "How do you see the rise in AI affecting STEM in the lab, classroom, industry, and/or in everyday society?" We received so many great responses that we need to split them into parts; here are the first several. Enjoy!




Ranger Mike
If things run true to form, and I have seen no vast improvement in the correct forecasting of future trends in these areas, I see lots of money going into these fields but not much usable product coming out. I chose not to dwell on such predictions, like back in the early 1980s when we were told the factory of the future would be a lights-out manufacturing operation with only a few humans doing maintenance to keep the machines running. Neither that nor the prediction of America becoming a service-only economy proved true, nor did...

 

This means I won't have to answer? I didn't so far, as I have nothing profound to say about it.
 
Same here. Glad the other troops stepped up! :smile:
 
I did not participate, as my algorithms could not process this, and returned NaN.
 
Sigh, I feel the subject is so speculative ATM I preferred to not even try.

Not that I doubt we will see more and more AI in the future. I am just not sure what it will really be.
 
Same for me. Now I don't have to think of anything clever about it.
 
fresh_42 said:
This means I won't have to answer? I didn't so far, as I have nothing profound to say about it.
berkeman said:
Same here. Glad the other troops stepped up! :smile:
George Jones said:
I did not participate, as my algorithms could not process this, and returned NaN.
Borek said:
Sigh, I feel the subject is so speculative ATM I preferred to not even try.

Not that I doubt we will see more and more AI in the future. I am just not sure what it will really be.
Doc Al said:
Same for me. Now I don't have to think of anything clever about it.

Since I have no idea what I'm talking about, I just went ahead and speculated. :smile:
 
Wait. I don't know a great deal about the science of AI either, but I do routinely follow developments, as you might know from the thread I introduced last year, "Notable Accomplishment in AI".

OK, AI is a bit speculative, especially AGI. But the impact will still be great even if AGI is not attainable. AI can be a tool for solving an almost infinite variety of problems, or an instrument for creating some very serious ones. Considering this, I am surprised that members of this forum are so disinterested.

So yeah I'm willing to stick my neck out a bit.
 
gleem said:
Considering this I am surprised that members of this forum are so disinterested.
I don't see it as a lack of interest; I see it as an unwillingness by some people to speculate, which is quite different. Besides, Greg said the response was amazing. How do you interpret that as "disinterested"?
 
  • #10
phinds said:
I don't see it as lack of interest, I see it as an unwillingness to speculate by some people which is quite different.
My problem was that I am not quite sure where algorithmic determinism and data analysis end and AI begins. I doubt that all programs labeled AI deserve the categorization. So it wasn't an unwillingness to speculate; rather, it was an unwillingness to check whether and where the label is justified. I have seen too many pigs chased through the village in my life, as we say here, to take any of those seriously in advance. The odds that it is just the latest trend with nothing substantial behind it yet are simply too high. I'd rather read a scientific paper in computer science than judge whether a program is AI or not.
 
  • #11
fresh_42 said:
My problem was that I am not quite sure where algorithmic determinism and data analysis end and AI begins.
Perhaps we could write some kind of learning algorithm to help with that task...
 
  • #12
Sorry Greg, I think I had a mental flatulation. I was so taken aback by the posts in this thread that I overlooked (don't ask me how) your statement.

@Greg Bernhardt I have begun a response; do I send it to you or post it in the thread when it is created?
 
  • #13
gleem said:
@Greg Bernhardt I have begun a response; do I send it to you or post it in the thread when it is created?
Great! You can send it to me, thanks!
 
  • #14
My problem is that I don't know enough to make a statement about AI. From time to time I've heard talks about "deep learning algorithms" used as a special data-analysis technique, but that's all I've heard about it so far from non-popular sources.
 
  • #15
Here's a nice summary.
https://www.the-scientist.com/magazine-issue/artificial-intelligence-versus-neural-networks-65802
"AI refers to any machine that is able to replicate human cognitive skills, such as problem solving," pattern recognition, or signal/signature analysis.

AI programs are found at many universities and scientific institutions. It has evolved with the advancement of computational resources, i.e., systems with faster and more numerous processors.

A bit more detail on neural networks.
http://news.mit.edu/2017/explained-neural-networks-deep-learning-0414
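To make the neural-network idea in those links concrete, here is a minimal sketch (my own illustration, not taken from either article) of the smallest possible "network": a single artificial neuron trained with the classic perceptron rule to reproduce the logical AND function:

```python
# Minimal "neural network": a single neuron (perceptron) learning the
# logical AND function. Illustrative toy only.

def step(x):
    # Hard threshold activation: fires (1) when the weighted sum is >= 0
    return 1 if x >= 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # one weight per input
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            y = step(w[0] * x1 + w[1] * x2 + b)
            err = target - y
            # Perceptron rule: nudge weights toward the desired output
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
predictions = [step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in data]
print(predictions)  # -> [0, 0, 0, 1], the AND truth table
```

Deep learning stacks many such units in layers and swaps the hard threshold for smooth activations so that errors can be propagated backwards through the whole stack.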
 
  • #16
IMO AI should be defined as being capable of lying on purpose.
 
  • #17
AI is simply a tool. It can be used (for beneficial purposes) or misused (for harmful purposes).
 
  • #18
phinds said:
Since I have no idea what I'm talking about I just went ahead and speculated. :smile:
When I think a question is too broad/vague or speculative, I just ignore it and ask and answer my own!
 
  • #19
Heck, I thought everyone here but me is AI anyway...

If I prick you, do you not... short out?
 
  • #20
chemisttree said:
Heck, I thought everyone here but me is AI anyway...

If I prick you, do you not... short out?
I don't know about the other guys here, but if you try to prick me, I'm calling the cops!
 
  • #21
fresh_42 said:
My problem was that I am not quite sure where algorithmic determinism and data analysis end and AI begins. I doubt that all programs labeled AI deserve the categorization. So it wasn't an unwillingness to speculate; rather, it was an unwillingness to check whether and where the label is justified. I have seen too many pigs chased through the village in my life, as we say here, to take any of those seriously in advance. The odds that it is just the latest trend with nothing substantial behind it yet are simply too high. I'd rather read a scientific paper in computer science than judge whether a program is AI or not.

I agree with this. In my field (experimental SC quantum computing) AI comes up in two contexts:

1) In helping run experiments. Primarily to optimise experimental parameters "on the fly" when dealing with very large parameter spaces (which is very often the case) and/or in feedback. Here the "boundaries" between AI, ML and regular optimisation are very blurry.
See e.g.
Lennon, D.T., Moon, H., Camenzind, L.C. et al. Efficiently measuring a quantum device using machine learning. npj Quantum Inf 5, 79 (2019). https://doi.org/10.1038/s41534-019-0193-4

However, some of the "ML" work is little more than clever curve fitting, so I wouldn't count that as AI.

2) As a possible application. Quantum machine learning is quite a hot topic at the moment (e.g. image recognition), and there is also work on things like deep learning (which I guess is closer to "proper" AI).
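As a toy illustration of point 1 (emphatically not the method of the Lennon et al. paper, which uses a proper machine-learning model), here is what "on the fly" parameter optimisation can look like in its simplest form: a random hill climb over two hypothetical control voltages of a simulated device response. The peak location and the quadratic response are made up for illustration:

```python
# Toy "on the fly" tuning loop: a random hill climb over two hypothetical
# control voltages, maximising a simulated device signal.
import random

def measure(v1, v2):
    # Stand-in for reading the instrument at settings (v1, v2);
    # the invented response peaks at (0.3, -0.7)
    return -((v1 - 0.3) ** 2 + (v2 + 0.7) ** 2)

def tune(steps=2000, step_size=0.05, seed=0):
    rng = random.Random(seed)
    v1, v2 = 0.0, 0.0            # arbitrary starting settings
    best = measure(v1, v2)
    for _ in range(steps):
        c1 = v1 + rng.uniform(-step_size, step_size)
        c2 = v2 + rng.uniform(-step_size, step_size)
        signal = measure(c1, c2)
        if signal > best:        # keep only changes that improve the signal
            v1, v2, best = c1, c2, signal
    return v1, v2

v1, v2 = tune()
print(round(v1, 2), round(v2, 2))  # settles near the peak at (0.3, -0.7)
```

This is exactly the blurry territory mentioned above: nobody would call a bare hill climb AI, yet the same loop driven by a learned surrogate model of `measure()` is routinely published as machine learning.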
 
  • #22
fresh_42 said:
My problem was that I am not quite sure where algorithmic determinism and data analysis end and AI begins. I doubt that all programs labeled AI deserve the categorization. So it wasn't an unwillingness to speculate; rather, it was an unwillingness to check whether and where the label is justified. I have seen too many pigs chased through the village in my life, as we say here, to take any of those seriously in advance. The odds that it is just the latest trend with nothing substantial behind it yet are simply too high. I'd rather read a scientific paper in computer science than judge whether a program is AI or not.

You raise a good point, which is connected to my own reply in the Conversations regarding AI.

I find that discussions regarding AI are often confused, because there are a number of different definitions of what artificial intelligence actually encompasses.

According to Chapter 1 of the best-selling standard textbook on AI, Artificial Intelligence: A Modern Approach by UC Berkeley computer scientist Stuart Russell and Google researcher Peter Norvig (a copy of which I own), AI can be defined along four different dimensions:

1. Thinking humanly
2. Thinking rationally
3. Acting humanly
4. Acting rationally

#1 and #2 together are concerned with thought processes and reasoning, whereas #3 and #4 together are concerned with behaviour.

#1 and #3 together measure success in terms of fidelity to human performance, whereas #2 and #4 together measure against an ideal performance measure, which we call rationality.

Any discussions regarding what AI is or is not needs to take the above definitions into account.
 
  • #23
"A rose by any other name would smell as sweet" It doesn't matter what you call it, machine learning, expert systems, natural language processing, intelligent retrieval, artificial learning, etc. Let us not get hung up on what we call AI and just concentrate on why, how, and where this technology can or should be implemented. Concentrate on the benefits and risks of its implementation. Develop plans and strategies to maximize the benefits relative to risks before things get out of hand and we end up with untenable problems.
 
  • #24
fresh_42 said:
IMO AI should be defined as being capable of lying on purpose.
I think that's an end game scenario. We'll have highly versatile, useful, and productive AI LONG before we get to that.
 
  • #25
phinds said:
I think that's an end game scenario. We'll have highly versatile, useful, and productive AI LONG before we get to that.
I'm still unsure whether AI isn't simply a better data-analysis method rather than intelligence. It isn't that long ago that we denied the animal world any intelligence beyond instinct and heredity at all! And at the same time we started to use the term AI? This doesn't make much sense to me. Some species are capable of lying, yet we refused to call that intelligence. And now some data-mining tools should be called such? Ridiculous.
 
  • #26
fresh_42 said:
I'm still unsure whether AI isn't simply a better data-analysis method rather than intelligence. It isn't that long ago that we denied the animal world any intelligence beyond instinct and heredity at all! And at the same time we started to use the term AI? This doesn't make much sense to me. Some species are capable of lying, yet we refused to call that intelligence. And now some data-mining tools should be called such? Ridiculous.
I think it gets down to just arguing about terminology. I'm an engineer. I'm more interested in actual characteristics than I am in names.
 
  • #27
phinds said:
I think it gets down to just arguing about terminology. I'm an engineer. I'm more interested in actual characteristics than I am in names.
Not really. The question is: what makes AI different from all classical algorithms? Is it a new property, or only a sophisticated algorithm?

But I admit that I'm playing devil's advocate in this discussion, and not all of my statements would survive further analysis. Personally, I don't like the "I" in AI. To me it is a sign of the usual human arrogance rather than a justified name, and a point of sale. So far we have had (probably among others):
ERPs (80s), IT certification (90s), OO (90s), customer relationship management (90s), risk analysis and backup systems (00s), data mining (00s), cloud computing and other remote concepts (10s), and now it is quantum computing and AI. It keeps consultancies busy and fills team meetings.

In this sense it is about terminology. But there is also the question of whether the word intelligence is justified, especially given that we refuse(d) to apply the label intelligence to nonhuman nature. If terminology suggests a property which isn't there, then terminology becomes an issue.
 
  • #28
@fresh_42 You may be interested in one of the first AI programs, the "Logic Theorist", developed by A. Newell, J. C. Shaw, and H. A. Simon. Newell was a cognitive psychologist. Among other things, they used it to prove theorems from Chapter 2 of Whitehead and Russell's "Principia Mathematica".

You can download it at https://www.rand.org/pubs/papers/P1320.html
 
  • #29
fresh_42 said:
The question is: what makes AI different from all classical algorithms? Is it a new property, or only a sophisticated algorithm?
No matter the answer to those questions, the question that was asked was: «How do you see the rise in A.I. affecting STEM in the lab, classroom, industry, and/or in everyday society?»

One point I raised in my answer is that a lot of people want to use AI (or whatever you want to call that technology) to make final decisions without any human supervision (like driving a car, or maybe even making societal choices). If we do go down that road (should we allow it?), we will have to redefine what responsibilities one individual has towards another, or even towards the group.

You must have an opinion about that.
 
  • #30
jack action said:
You must have an opinion about that.
I do. I would consider it an additional parameter rather than a trigger for decisions. But I'm a bit old-fashioned, and things are far less dramatic in reality than Asimov's laws would suggest. It is not the apocalypse waiting around the corner, nor the Futurological Congress, nor Skynet, nor the Matrix. It will be a useful tool, and certainly not all fantasies will come true. I am strongly against decision-making by machines; it causes people to drive into rivers because their GPS tells them to. Just a few hours ago, Cortana informed me where she (?) put my alerts. As if that were an "I". If I knew how, I would uninstall that rubbish.
 
  • #31
fresh_42 said:
... Cortana informed me where she (?) put my alerts. As if that were an "I". If I knew how, I would uninstall that rubbish.
I also find Cortana incredibly obnoxious. I got rid of it as soon as I installed Win10, but that's been several years now and I don't recall exactly what I did. I do know I found instructions on the internet. We are not the only ones who hate it.
 
  • #32
jack action said:
No matter the answer to those questions, the question that was asked was: «How do you see the rise in A.I. affecting STEM in the lab, classroom, industry, and/or in everyday society?»

One point I raised in my answer is that a lot of people want to use AI (or whatever you want to call that technology) to make final decisions without any human supervision (like driving a car, or maybe even making societal choices). If we do go down that road (should we allow it?), we will have to redefine what responsibilities one individual has towards another, or even towards the group.

You must have an opinion about that.
There was this book, "Weapons of Math Destruction", based on that: on decisions made by machines bypassing or ignoring human judgment, and the risks associated with it.
 
  • #33
So @Greg Bernhardt , when will these be published?

I prefer an extremely broad definition of AI, one that would include almost all technology and automation. I would include James Watt's flyball governor from the 1700s as an AI device. It "decided" how to adjust the throttle to maintain nearly constant speed. The methods of making the "decision" are beside the point.

My reason is simple: if we replace some humans with some machines, who cares if the machine is AI or not? Even that flyball governor displaced some intelligent human workers who could have sat there with their hands on the throttle and made intelligent decisions to regulate it.

If we have technology providing benefits on one hand, risks on the second hand, and displacing human workers on the third hand, what difference does it make whether that technology deserves the label AI?

IMO the debate over what AI really means is a needless distraction. IMO claims that AI is a discrete approach or a "quantum leap" are false. Comparisons to human intelligence and the magic of cognition are not helpful. We will have a continuous spectrum of machine abilities, and of methods to achieve those abilities.
 
  • #34
anorlunda said:
So @Greg Bernhardt , when will these be published?
First part tonight or tomorrow!
 
  • #35
fresh_42 said:
IMO AI should be defined as being capable of lying on purpose.
I wrote some AI for myself:
Code:
#include <stdio.h>

int main(void) {
        printf("You are smart and attractive. Everybody likes you.\n");
        return 0;
}
 
  • #36
anorlunda said:
It "decided" how to adjust the throttle to maintain nearly constant speed. The methods of making the "decision" are beside the point.
I wouldn't go that far. Otherwise, one could say that when a ball sitting at the top of a hill starts rolling down, it "decided" to roll down. It didn't. It just reacted to changes in its environment.

To me, intelligence is the capacity to identify patterns. I believe we can make machines that can do this, although I'm not sure how far we have gone into that domain yet. The intelligence behind the flyball governor doesn't come from the machine itself; it came from someone who saw a pattern between an object in rotation and the forces it created.

What people fear about AI comes from the fact that they link "intelligence" to "being alive". But there is no direct link between the two. Being alive is mostly about being able to replicate yourself. That relates to the concept of autonomy, the capacity to function independently. An entity doesn't need intelligence to replicate itself, and an intelligent machine that was designed to find patterns in, say, scientific papers will not "evolve" to replicate itself.

Even if we assume that an intelligent machine will evolve to replicate itself (and we are really far from that), some people worry that the machines will go on to destroy humans. But that is just a very pessimistic assumption. There are plenty of life forms on this planet, and none of them has the goal of destroying other life forms. From what we understand, diversity is important for survival, and there are only disadvantages to destroying other life forms. Why would a new intelligent life form (designed by us) come to a different conclusion?
 
  • #37
jack action said:
To me, intelligence is the capacity to identify patterns.
Therein lies the problem with public discussions. Unless the discussion is preceded by rigid and boring definitions of common words, each participant uses his or her personal favorite definition and we wind up talking past each other. For example, define "decide" for all possible relevant contexts. We have something like an uncanny valley that poisons public discussions of AI.
 
  • #38
anorlunda said:
I prefer an extremely broad definition of AI, one that would include almost all technology and automation. I would include James Watt's flyball governor from the 1700s as an AI device. It "decided" how to adjust the throttle to maintain nearly constant speed. The methods of making the "decision" are beside the point.

My reason is simple: if we replace some humans with some machines, who cares if the machine is AI or not? Even that flyball governor displaced some intelligent human workers who could have sat there with their hands on the throttle and made intelligent decisions to regulate it.

This reminds me of a story I heard from an aquacultural engineer I know who makes/installs mostly automated water systems (they call them life support systems) for various kinds of fish facilities.
They had done an install in a Central or South American shrimp farm and went back after a year or so to check up on how it was working out (maintenance contract possibly).
They found a guy sitting on a chair in the pump room (where most of the equipment is). He was there because one of the automated valves had stopped working, and he would manually control the valve when appropriate. This was a cheaper solution (his salary) than getting the valve fixed at the time.
My friend says he was a very loyal employee.

Maybe that's Artificial Machine Intelligence (AMI)!

Since I don't know a lot about the details of AI function and definitions, I also took the approach of addressing AI simply as increased machine smarts and proceeded from there.
 
  • #39
fresh_42 said:
Not really. The question is: what makes AI different from all classical algorithms? Is it a new property, or only a sophisticated algorithm?
anorlunda said:
I prefer an extremely broad definition of AI, one that would include almost all technology and automation. I would include James Watt's flyball governor from the 1700s as an AI device...

My reason is simple: if we replace some humans with some machines, who cares if the machine is AI or not? Even that flyball governor displaced some intelligent human workers who could have sat there with their hands on the throttle and made intelligent decisions to regulate it.

If we have technology providing benefits on one hand, risks on the second hand, and displacing human workers on the third hand, what difference does it make whether that technology deserves the label AI?

IMO the debate over what AI really means is a needless distraction. IMO claims that AI is a discrete approach or a "quantum leap" are false. Comparisons to human intelligence and the magic of cognition are not helpful. We will have a continuous spectrum of machine abilities, and of methods to achieve those abilities.
There's a decent chance you guys will like my response. I share the concern/annoyance about definitions and the pop-culture feel of the term, so I just drove over it and declared a broad definition to answer the question.

Although:
fresh_42 said:
...I am strongly against decision-making by machines; it causes people to drive into rivers because their GPS tells them to.
That's a provocative statement. You do own/use a thermostat, don't you?

But otherwise I agree with the sentiment: the level of decision-making we leave to machines can become problematic if it isn't done carefully, or if the machines or people aren't prepared for the shift in responsibility.
 
  • #40
jack action said:
I wouldn't go that far. Otherwise, one could say that when a ball sitting at the top of a hill starts rolling down, it "decided" to roll down. It didn't. It just reacted to changes in its environment.
What makes you think you are any different from that ball?
To me, intelligence is the capacity to identify patterns. I believe we can make machines that can do this, although I'm not sure how far we have gone into that domain yet.
What kind of/complexity of patterns? Does a PID controller on a thermostat qualify? It learns the responsiveness of your HVAC system and adjusts the timing of the start/stop to avoid undershoot and overshoot.
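For readers who haven't met one, here is a minimal sketch of the PID idea being referred to. The gains, the heater limit, and the one-line room model are all invented for illustration; no real thermostat firmware looks like this:

```python
# Minimal PID controller driving a toy room-temperature model toward a
# setpoint. All constants are illustrative, not tuned for any real HVAC.

def simulate(setpoint=21.0, temp=15.0, kp=0.6, ki=0.05, kd=0.1,
             dt=1.0, steps=300):
    integral = 0.0
    prev_error = setpoint - temp
    for _ in range(steps):
        error = setpoint - temp
        integral += error * dt                  # accumulated history
        derivative = (error - prev_error) / dt  # current trend
        heat = kp * error + ki * integral + kd * derivative
        heat = max(0.0, min(heat, 5.0))         # heater output is bounded
        # Toy room model: heater input minus leakage toward 10 degrees outside
        temp += (heat - 0.1 * (temp - 10.0)) * dt
        prev_error = error
    return temp

final = simulate()
print(round(final, 2))  # settles close to the 21.0 setpoint
```

The controller "decides" how much heat to apply from the error, its accumulated history, and its trend; whether that counts as identifying a pattern is exactly the question at issue here.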
 
  • #41
russ_watters said:
That's a provocative statement. You do own/use a thermostat, don't you?
These accidents with people driving into rivers actually occur. And the latest fatal mismatch was caused by Boeing's software change to avoid a new FAA approval procedure for its 737 MAX. As long as people program by the quick-and-dirty method, I will oppose automatic decisions.
 
  • #42
fresh_42 said:
These accidents with people driving into rivers actually occur. And the latest fatal mismatch was caused by Boeing's software change to avoid a new FAA approval procedure for its 737 MAX. As long as people program by the quick-and-dirty method, I will oppose automatic decisions.
I'm aware the accidents occur/the problem exists. My concern was that the statement seems extremely broad/absolute.

In my view, much of the current/recent problem is caused by a sort of "uncanny valley" for decision-making where the humans don't correctly understand the limitations of the automation or their (the humans') role in the algorithm.
 
  • #43
fresh_42 said:
I oppose automatic decisions.
I think you probably only oppose some decisions. If the 737 Max autopilot decided to turn itself off and return control to the pilot, I'm guessing you would approve. That's an automatic decision too. If my car decided to invoke the anti-skid features, or decides that I'm drunk and it refuses to start, those are automatic decisions.

As @russ_watters said, if your thermostat makes the decision that you need more heat, would you oppose that decision?

There is a continuum of decisions. It's very hard to define a general line between what we approve and disapprove.

It sounds like Russ and I agree that it would be a mistake to allow dual safety standards, one for AI and another for non-AI automation.
 
  • #44
anorlunda said:
I think you probably only oppose some decisions. If the 737 Max autopilot decided to turn itself off and return control to the pilot, I'm guessing you would approve. That's an automatic decision too. If my car decided to invoke the anti-skid features, or decides that I'm drunk and it refuses to start, those are automatic decisions...

It sounds like Russ and I agree that it would be a mistake to allow dual safety standards, one for AI and another for non-AI automation.
I would suggest that bad decisions by badly written algorithms (like the 737 Max MCAS) should be excluded because any machine can be badly designed, with a fatal flaw, and that doesn't really have anything to do with automation. I don't see a fundamental difference between a badly designed algorithm failing and crashing the plane and an improperly designed or installed control linkage breaking and crashing the plane.

In my opinion, the problem with advanced automation is that the boundary between human and computer control is not always clear to the users or they don't like where it is and try to override the automation...or they simply don't care about the risk of giving the automation more responsibility than it is designed to have. Several people have been killed by self-driving cars for that reason, for example.

...and as for GPS in a car, the responsibility really isn't any different from a paper map. A GPS can give more/better information, but it bears little or no responsibility for the final decisions, and it doesn't execute them itself. As far as I can judge, GPS directions aren't automation at all.
 
  • #45
russ_watters said:
In my opinion, the problem with advanced automation is that the boundary between human and computer control is not always clear
We're in agreement. But we need to resist attempts to define that as an AI-related problem. The public is easily convinced that "advanced" and "AI" are synonymous.

If AI helps Boeing sell airplanes, they'll hang the tag on everything they market. But if AI is seen as evil, they'll simply deny that their devices contain AI.
 
  • #46
russ_watters said:
What makes you think you are any different from that ball?
I understand what effect a hill has on me (and even on others).
russ_watters said:
What kind of/complexity of patterns? Does a PID controller on a thermostat qualify? It learns the responsiveness of your HVAC system and adjusts the timing of the start/stop to avoid undershoot and overshoot.
If it works as a system with feedback (like the flyball governor), I don't think of it as "identifying a pattern". Being intelligent means identifying a pattern that no one pointed out to you first. The example I like is a computer program that is fed scientific papers and then spits out a new, unknown scientific theory. Another example: if a computer in a car was taught how to drive in America, and then you put it in England and it figured out by itself that it must drive on the other side of the road, that would be intelligent. One that is not intelligent would stubbornly try to stay on the same side, maybe just being really good at maneuvering to avoid crashes.

That is why I agree with others when they say "artificial intelligence" is thrown around lightly. Most machines labeled AI mostly mimic cognitive functions with some complex feedback mechanisms. With my self-driving-car example applied to today's machines, most likely the car would have extra information that says "when positioned within the limits of England, switch sides". That's feedback. This is why @russ_watters refers to "badly written algorithms": the decision process lies with the programmer, who feeds in the causes and effects he knows, and the computer program never gets outside these definitions. That's feedback. AI would find either new causes or new effects, or would be given a new cause and deduce the effect based on its knowledge.
 
  • #47
fresh_42 said:
I am strongly against decision-making by machines; it causes people to drive into rivers because their GPS tells them to.

That isn't decision making by machines. It's (stupid) decision making by people who are (stupidly) relying too much on information from a machine.

Now if the faulty GPS were an input to the control program of a self-driving car, which then drove into a river based on the GPS info, that would be decision making by machines. And I don't think machines should be allowed to make decisions of that kind until they get reliable enough not to do things like that.
 
  • #48
PeterDonis said:
That isn't decision making by machines. It's (stupid) decision making by people who are (stupidly) relying too much on information from a machine.
Indeed. But if not even this works, how much less will it work with more sophisticated programs? It wasn't meant as an example of AI, but rather as an example of the notoriously bad interaction. An example where AI failed: Facebook ads frequently suggest that I should consult companies which would help me with my tax problems as an American living in Europe. Sure, that isn't an actual issue, but it demonstrates the nonexistent reliability.
 
  • #49
fresh_42 said:
if not even this works, how much less will it work with more sophisticated programs

I agree that, before we even think about using a particular program as input to an AI decision maker, we should first make sure it doesn't give obviously stupid input or produce obviously stupid results with human decision makers. And a lot of hype surrounding AI ignores the obvious fact that programs which fail that simple test are ubiquitous, and programs which can pass it are rare.
 
  • #50
It is not the normal case which is risky; it is the exceptions: my online behavior, which doesn't match the standard, or, in the case of the 737, the low altitude or whatever it was. And there is still my argument that "it keeps consultancies busy". (I have had at least four AI ads recently from SAS.)
 
