Insights: How AI Will Revolutionize STEM and Society

Summary
The discussion centers on the impact of AI on STEM fields, with participants expressing a range of views on its implications in labs, classrooms, and industries. Many participants highlight the speculative nature of AI's future, questioning its true capabilities and the validity of labeling certain technologies as AI. Concerns are raised about the potential misuse of AI and the risks associated with allowing machines to make decisions without human oversight. The dialogue also touches on the blurred lines between AI, machine learning, and traditional algorithms, with some arguing that the term "artificial intelligence" is often misapplied. Participants emphasize the importance of defining AI clearly to avoid confusion and ensure responsible implementation. The conversation reflects a cautious optimism about AI's potential as a tool while acknowledging the need for careful consideration of its limitations and ethical implications.
  • #31
fresh_42 said:
... Cortana informed me where she (?) put my alerts. As if that were an "I". If I knew how, I would uninstall that rubbish.
I also find Cortana incredibly obnoxious. I got rid of it as soon as I installed Win10, but that was several years ago and I don't recall exactly what I did. I do know I found instructions on the internet. We are not the only ones who hate it.
 
  • #32
jack action said:
No matter what the answers to those questions are, the question that was asked was: "How do you see the rise in A.I. affecting STEM in the lab, classroom, industry and/or in everyday society?"

One point I raised in my answer is the fact that a lot of people want to use AI (or whatever you want to call that technology) to make final decisions, without any human supervision (like driving a car or maybe even making societal choices). If we do go down that road (should we allow it?), we will have to redefine what responsibilities one individual has towards another, or even towards the group.

You must have an opinion about that.
There was a book, "Weapons of Math Destruction", based on exactly that: decisions made by machines bypassing or ignoring human judgment, and the risks associated with it.
 
  • #33
So @Greg Bernhardt , when will these be published?

I prefer an extremely broad definition of AI. One that would include almost all technology and automation. I would include James Watt's flyball governor from the 1700s as an AI device. It "decided" how to adjust the throttle to maintain nearly constant speed. The methods of making the "decision" are beside the point.
[Image: Watt's flyball governor]


My reason is simple: if we replace a human with a machine, who cares if the machine is AI or not? Even that flyball governor displaced intelligent human workers who could have sat there with their hands on the throttle and made intelligent decisions to regulate it.
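Just to make the point concrete: the governor's entire "decision" fits in a couple of lines. Here's a toy sketch in C - the constants and the one-line "engine" are invented for illustration, not a model of any real machine:
Code:
#include <stdio.h>

/* Watt's flyball governor as pure proportional control:
   throttle position is a fixed function of the speed error. */
int main(void) {
    const double setpoint = 100.0;   /* target speed, rpm */
    const double gain = 0.05;        /* throttle change per rpm of error */
    double rpm = 80.0;

    for (int t = 0; t < 20; t++) {
        double error = setpoint - rpm;
        double throttle = 0.5 + gain * error;  /* the whole "decision" */
        if (throttle < 0.0) throttle = 0.0;
        if (throttle > 1.0) throttle = 1.0;
        rpm += 25.0 * throttle - 0.2 * rpm;    /* crude engine response */
        printf("t=%2d  rpm=%6.1f  throttle=%4.2f\n", t, rpm, throttle);
    }
    return 0;
}
Whether that loop "decides" anything is exactly the question.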

If we have technology providing benefits on one hand, risks on the other, and displacing human workers on a third hand, what difference does it make if that technology deserves the label AI?

IMO, debate over what AI really means is a needless distraction. IMO, claims that AI is a discrete approach or a "quantum leap" are false. Comparisons to human intelligence and the magic of cognition are not helpful. We will have a continuous spectrum of machine abilities, and of methods to achieve those abilities.
 
  • Like
Likes 256bits, Averagesupernova, russ_watters and 1 other person
  • #34
anorlunda said:
So @Greg Bernhardt , when will these be published?
First part tonight or tomorrow!
 
  • Like
Likes anorlunda
  • #35
fresh_42 said:
IMO AI should be defined as being capable of lying on purpose.
I wrote some AI for myself:
Code:
#include <stdio.h>

int main(void) {
        printf("You are smart and attractive. Everybody likes you.\n");
        return 0;
}
 
  • Haha
Likes Astronuc, phinds and anorlunda
  • #36
anorlunda said:
It "decided" how to adjust the throttle to maintain nearly constant speed. The methods of making the "decision" are beside the point.
I wouldn't go that far. Otherwise, one could say that when a ball sitting at the top of a hill starts rolling down, it "decided" to roll down. It didn't. It just reacts to changes in its environment.

To me, intelligence is the capacity to identify patterns. I believe we can make machines that can do this, although I'm not sure how far we have gone into that domain yet. The intelligence behind the flyball governor doesn't come from the machine itself; it came from someone who saw a pattern between an object in rotation and the forces it creates.

What people fear about AI comes from the fact that they link "intelligence" to "being alive". But there is no direct link between the two. Being alive is mostly about being able to replicate yourself. That relates to the concept of autonomy, the capacity to function independently. An entity doesn't need intelligence to replicate itself. And an intelligent machine that was designed to find patterns in, say, scientific papers will not "evolve" to replicate itself.

Even if we assume that an intelligent machine will evolve to replicate itself - and we are really far from that - some people worry that the machines will go on to destroy humans. But that is just a very pessimistic assumption. There are plenty of life forms on this planet and none of them has the goal of destroying other life forms. And from what we understand, diversity is important for survival and there are only disadvantages when it comes to destroying other life forms. Why would a new intelligent life form (designed by us) come to a different conclusion?
 
  • #37
jack action said:
To me, intelligence is the capacity to identify patterns.
Therein is the problem with public discussions. Unless the discussion is preceded by rigid and boring definitions of common words, each participant uses his/her personal favorite definition and we wind up talking past each other. For example, define "decide" for all possible relevant contexts. We have something like an uncanny valley that poisons public discussions of AI.
 
  • Like
Likes 256bits and russ_watters
  • #38
anorlunda said:
I prefer an extremely broad definition of AI. One that would include almost all technology and automation. I would include James Watt's flyball governor from the 1700s as an AI device. It "decided" how to adjust the throttle to maintain nearly constant speed. The methods of making the "decision" are beside the point.
[Image: Watt's flyball governor]


My reason is simple: if we replace a human with a machine, who cares if the machine is AI or not? Even that flyball governor displaced intelligent human workers who could have sat there with their hands on the throttle and made intelligent decisions to regulate it.

This reminds me of a story I heard from an aquacultural engineer I know who makes/installs mostly automated water systems (they call them life support systems) for various kinds of fish facilities.
They had done an install in a Central or South American shrimp farm and went back after a year or so to check up on how it was working out (maintenance contract possibly).
They found a guy sitting on a chair in the pump room (where most of the equipment is). He was there because one of the automated valves had stopped working. He would manually control the valve when appropriate. This was a cheaper solution (his salary) than getting the valve fixed at the time.
My friend says he was a very loyal employee.

Maybe that's Artificial Machine Intelligence (AMI)!

Since I don't know a lot about the details of AI function and definitions, I took the approach of addressing AI simply as increased machine smarts and proceeded from there.
 
  • Haha
Likes anorlunda
  • #39
fresh_42 said:
Not really. The question is: What makes AI different from all classical algorithms? Is it a new property or only a sophisticated algorithm?
anorlunda said:
I prefer an extremely broad definition of AI. One that would include almost all technology and automation. I would include James Watt's flyball governor from the 1700s as an AI device...

My reason is simple: if we replace a human with a machine, who cares if the machine is AI or not? Even that flyball governor displaced intelligent human workers who could have sat there with their hands on the throttle and made intelligent decisions to regulate it.

If we have technology providing benefits on one hand, risks on the other, and displacing human workers on a third hand, what difference does it make if that technology deserves the label AI?

IMO, debate over what AI really means is a needless distraction. IMO, claims that AI is a discrete approach or a "quantum leap" are false. Comparisons to human intelligence and the magic of cognition are not helpful. We will have a continuous spectrum of machine abilities, and of methods to achieve those abilities.
There's a decent chance you guys will like my response. I share the concern/annoyance about definitions and the pop-culture feel of the term, so I just drove over it and declared a broad definition to answer the question.

Although:
fresh_42 said:
...I am strongly against decision making by machines. It causes people to drive into rivers because the GPS tells them to.
That's a provocative statement. You do own/use a thermostat, don't you?

But otherwise I agree with the sentiment; the level of decision making we leave to machines can become problematic if it isn't done carefully, or if the machines or people aren't prepared for the shift in responsibility.
 
  • #40
jack action said:
I wouldn't go that far. Otherwise, one could say that when a ball sitting at the top of a hill starts rolling down, it "decided" to roll down. It didn't. It just reacts to changes in its environment.
What makes you think you are any different from that ball?
To me, intelligence is the capacity to identify patterns. I believe we can make machines that can do this, although I'm not sure how far we have gone into that domain yet.
What kind of/complexity of patterns? Does a PID controller on a thermostat qualify? It learns the responsiveness of your HVAC system and adjusts the timing of the start/stop to avoid undershoot and overshoot.
 
  • #41
russ_watters said:
That's a provocative statement. You do own/use a thermostat, don't you?
These accidents with people driving into rivers actually occur. And the latest fatal mismatch was caused by Boeing's software change to avoid a new FAA approval procedure for its 737 Max. As long as people program by the quick-and-dirty method, I will oppose automatic decisions.
 
  • #42
fresh_42 said:
These accidents with people driving into rivers actually occur. And the latest fatal mismatch was caused by Boeing's software change to avoid a new FAA approval procedure for its 737 Max. As long as people program by the quick-and-dirty method, I will oppose automatic decisions.
I'm aware the accidents occur/the problem exists. My concern was that the statement seems extremely broad/absolute.

In my view, much of the current/recent problem is caused by a sort of "uncanny valley" for decision-making where the humans don't correctly understand the limitations of the automation or their (the humans') role in the algorithm.
 
  • #43
fresh_42 said:
I oppose automatic decisions.
I think you probably only oppose some decisions. If the 737 Max autopilot decided to turn itself off and return control to the pilot, I'm guessing you would approve. That's an automatic decision too. If my car decides to invoke the anti-skid features, or decides that I'm drunk and refuses to start, those are automatic decisions.

As @russ_watters said, if your thermostat makes the decision that you need more heat, would you oppose that decision?

There is a continuum of decisions. It's very hard to define a general line between what we approve and disapprove.

It sounds like Russ and I will agree that it would be a mistake to allow dual safety standards, one for AI and another for non-AI automation.
 
  • #44
anorlunda said:
I think you probably only oppose some decisions. If the 737 Max autopilot decided to turn itself off and return control to the pilot, I'm guessing you would approve. That's an automatic decision too. If my car decides to invoke the anti-skid features, or decides that I'm drunk and refuses to start, those are automatic decisions...

It sounds like Russ and I will agree that it would be a mistake to allow dual safety standards, one for AI and another for non-AI automation.
I would suggest that bad decisions by badly written algorithms (like the 737 Max MCAS) should be excluded because any machine can be badly designed, with a fatal flaw, and that doesn't really have anything to do with automation. I don't see a fundamental difference between a badly designed algorithm failing and crashing the plane and an improperly designed or installed control linkage breaking and crashing the plane.

In my opinion, the problem with advanced automation is that the boundary between human and computer control is not always clear to the users or they don't like where it is and try to override the automation...or they simply don't care about the risk of giving the automation more responsibility than it is designed to have. Several people have been killed by self-driving cars for that reason, for example.

...and as for GPS in a car, the responsibility really isn't any different from a paper map. A GPS can give more/better information, but it bears little or no responsibility for the final decisions, and doesn't execute them itself. As far as I can judge, GPS directions aren't automation at all.
 
Last edited:
  • #45
russ_watters said:
In my opinion, the problem with advanced automation is that the boundary between human and computer control is not always clear
We're in agreement. But we need to resist attempts to define that as an AI-related problem. The public is easily convinced that "advanced" and "AI" are synonymous.

If AI helps Boeing sell airplanes, they'll hang the tag on everything they market. But if AI is seen as evil, they'll simply deny that their devices contain AI.
 
  • Like
Likes russ_watters
  • #46
russ_watters said:
What makes you think you are any different from that ball?
I understand what effect a hill has on me (and even on others).
russ_watters said:
What kind of/complexity of patterns? Does a PID controller on a thermostat qualify? It learns the responsiveness of your HVAC system and adjusts the timing of the start/stop to avoid undershoot and overshoot.
If it works as a system with feedback (like the flyball governor), I don't think of it as "identifying a pattern". Being intelligent means identifying a pattern that no one pointed out to you first. The example I like is a computer program that is fed scientific papers and then spits out a new, unknown scientific theory. Another example: if a computer in a car was taught how to drive in America and you then put it in England, and it figured out by itself that it must drive on the other side of the road, that would be intelligent. One that is not intelligent would stubbornly try to stay on the same side, maybe just being really good at maneuvering to avoid crashes.

That is why I agree with others when they say "artificial intelligence" is thrown around lightly. Most machines defined as AI mostly mimic cognitive functions with some complex feedback mechanisms. Applying my self-driving car example to today's machines, the car would most likely have extra information that says "When positioned within the limits of England, switch sides." That's feedback. This is why @russ_watters refers to "badly written algorithms": the decision process lies with the programmer, who feeds in the causes and effects he knows, and the computer program never gets outside these definitions. That's feedback. AI would find either new causes or new effects, or would be given a new cause and deduce the effect based on its knowledge.
 
  • #47
fresh_42 said:
I am strongly against decision making by machines. It causes people to drive into rivers because the GPS tells them to.

That isn't decision making by machines. It's (stupid) decision making by people who are (stupidly) relying too much on information from a machine.

Now if the faulty GPS were an input to the control program of a self-driving car, which then drove into a river based on the GPS info, that would be decision making by machines. And I don't think machines should be allowed to make decisions of that kind until they get reliable enough not to do things like that.
 
  • Like
Likes russ_watters and phinds
  • #48
PeterDonis said:
That isn't decision making by machines. It's (stupid) decision making by people who are (stupidly) relying too much on information from a machine.
Indeed. But if not even this works, how much less will it work with more sophisticated programs? It wasn't meant as an example of AI, rather as an example of the notoriously bad interaction between humans and machines. An example where AI failed: Facebook ads frequently suggest that I should consult companies which help Americans living in Europe with their tax problems. Sure, that isn't an actual issue for me, but it demonstrates the nonexistent reliability.
 
  • #49
fresh_42 said:
if not even this works, how much less will it work with more sophisticated programs

I agree that, before we even think about using a particular program as input to an AI decision maker, we should first make sure it doesn't give obviously stupid input or produce obviously stupid results with human decision makers. And a lot of hype surrounding AI ignores the obvious fact that programs which fail that simple test are ubiquitous, and programs which can pass it are rare.
 
  • #50
It is not the normal case which is risky, it is the exceptions - like my online behavior, which doesn't match the standard, or, in the case of the 737, the low altitude or whatever it was. And there is still my argument "it keeps consultancies busy". (I had at least four AI ads recently from SAS.)
 
  • #51
jack action said:
I understand what effect a hill has on me (and even on others).
What relevance does that have in the context of this thread?
If it works as a system with feedback (like the flyball governor), I don't think of it as "identifying a pattern".
No, the flyball governor is proportional control only. A specific rpm yields a specific throttle position, and that's it. A PID controller learns the responsiveness of the feedback system and adjusts its outputs accordingly. For example, your thermostat will turn on or off before it senses a temperature change because it remembers how long it took the last few times. And it will adjust as that delay changes; it will turn on sooner and off later on a hot day than on a cooler day.
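To make the contrast concrete, here's a toy PID loop in C. The gains and the one-line "room" model are invented for illustration; a real adaptive thermostat additionally tunes its behavior from the response it observes:
Code:
#include <stdio.h>

/* Toy PID thermostat: P reacts to the current error, I removes the
   steady-state offset a P-only device is stuck with, D anticipates
   and damps overshoot/undershoot. */
int main(void) {
    const double kp = 0.6, ki = 0.05, kd = 1.5;  /* assumed tuning */
    const double setpoint = 21.0;                /* degrees C */
    double temp = 17.0, integral = 0.0;
    double prev_error = setpoint - temp;

    for (int minute = 0; minute < 40; minute++) {
        double error = setpoint - temp;
        integral += error;
        double derivative = error - prev_error;
        double heat = kp * error + ki * integral + kd * derivative;
        if (heat < 0.0) heat = 0.0;              /* the heater can't cool */
        if (heat > 1.0) heat = 1.0;
        prev_error = error;
        temp += 0.8 * heat - 0.1 * (temp - 15.0);  /* crude room model */
        printf("min=%2d  temp=%5.2f  heat=%4.2f\n", minute, temp, heat);
    }
    return 0;
}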
Being intelligent means identifying a pattern that no one pointed out to you first.
It appears to me the thermostat qualifies by that definition.
Another example: if a computer in a car was taught how to drive in America and you then put it in England, and it figured out by itself that it must drive on the other side of the road, that would be intelligent.
By that definition, an awful lot of humans aren't intelligent. Most are told ahead of time that they have to drive on the left side of the road and some still mess it up.
That is why I agree with others when they say "artificial intelligence" is thrown around lightly. Most machines defined as AI mostly mimic cognitive functions with some complex feedback mechanisms.
How do you know you don't?
This is why @russ_watters refers to "badly written algorithms": the decision process lies with the programmer, who feeds in the causes and effects he knows, and the computer program never gets outside these definitions. That's feedback. AI would find either new causes or new effects, or would be given a new cause and deduce the effect based on its knowledge.
I have a lot of bad habits I can't seem to break. I suppose that means I lack some intelligence, but at the same time I'd be ok with blaming my programmer for not writing them better.
 
  • #52
russ_watters said:
What relevance does that have in the context of this thread?
It is the basis for "making a decision". It implies you understand that an effect has a cause and that you can act differently according to it. The flyball governor has no knowledge of what is happening. It just gets pushed around by other objects, some of which may have the intelligence to do it with the goal of obtaining a certain effect.
russ_watters said:
It appears to me the thermostat qualifies by that definition.
I never said it wasn't. But if you think it qualifies according to my definition, then it is AI for me as well.

I'm no expert on AI, but maybe it could be defined as "advanced feedback". Maybe the line is blurry where feedback ends and AI begins. But in the end, saying that a flyball governor is the same as AI seems exaggerated to me. It sounds like saying a boiler sitting on a fire is the same as a nuclear power plant. The end results might be similar, but the processes are totally different.
russ_watters said:
By that definition, an awful lot of humans aren't intelligent. Most are told ahead of time that they have to drive on the left side of the road and some still mess it up.
I know you understand what I mean, and that wasn't it. Any human, as stupid as he or she could be - without being told - will realize at some point that everybody is doing what he or she used to do, the only difference being that they are on the other side of the road, and that it is easier to switch sides than to fight his or her way through.

Another example is learning an unknown language. You can put any human in an environment where everybody speaks a language other than his or her own, and this human will learn it, without anyone teaching it to him, just by observing and noticing patterns. It's a question of time, but it will happen.
russ_watters said:
How do you know you don't?
There is a joke going around where AI is defined as a series of nested IF...ELSE statements. That is not AI, because the program does exactly what it was initially told to do. But with the computing power available today, a program may evaluate so many conditional statements that to us mere humans it looks like intelligence. But it still isn't; it just mimics it.
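The joke, literally, looks something like this (a made-up toy, of course):
Code:
#include <stdio.h>
#include <string.h>

/* "AI" as nested if/else: the program does exactly and only what it
   was told to do; no pattern is ever learned. */
static const char *reply(const char *input) {
    if (strcmp(input, "hello") == 0) {
        return "Hi there!";
    } else if (strcmp(input, "how are you") == 0) {
        return "I am fine. (I was told to say that.)";
    } else if (strcmp(input, "bye") == 0) {
        return "Goodbye!";
    } else {
        return "I don't understand.";  /* anything off-script fails */
    }
}

int main(void) {
    printf("%s\n", reply("hello"));
    printf("%s\n", reply("tell me something new"));
    return 0;
}
Add a few million more branches and enough computing power, and from the outside it starts to look like intelligence.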
russ_watters said:
I have a lot of bad habits I can't seem to break. I suppose that means I lack some intelligence, but at the same time I'd be ok with blaming my programmer for not writing them better.
But when you have a machine with AI, it may make decisions completely unforeseen by the programmer. You can compare that to owning a dog: if the dog does something that is unwanted, who is responsible? The breeder, the trainer, the owner, or the dog itself? There may not be a single answer that fits all possible cases.
 
  • #53
jack action said:
It is the basis for "making a decision"...

...I know you understand what I mean, and that wasn't it. Any human, as stupid as he or she could be - without being told - will realize at some point that everybody is doing what he or she used to do, the only difference being that they are on the other side of the road, and that it is easier to switch sides than to fight his or her way through.

Another example is learning an unknown language. You can put any human in an environment where everybody speaks a language other than his or her own, and this human will learn it, without anyone teaching it to him, just by observing and noticing patterns. It's a question of time, but it will happen.

There is a joke going around where AI is defined as a series of nested IF...ELSE statements. That is not AI, because the program does exactly what it was initially told to do. But with the computing power available today, a program may evaluate so many conditional statements that to us mere humans it looks like intelligence. But it still isn't; it just mimics it.

But when you have a machine with AI, it may make decisions completely unforeseen by the programmer. You can compare that to owning a dog: if the dog does something that is unwanted, who is responsible? The breeder, the trainer, the owner, or the dog itself? There may not be a single answer that fits all possible cases.
So again: how do you know that you don't?

I'm not being glib with that question; I really mean it and would like an answer. How do you know that you aren't just acting on an extremely complex set of if/then/else statements based on an extremely complex set of inputs? Even worse, how can you be sure that what differentiates you from a computer isn't that you suck at it?

People, dogs and gnats are unpredictable largely because they suck at being logical, not because they are intelligent. Maybe part of the definition issue is that it isn't "intelligence" people think of when they think of AI, but artificial life. They want something that feels real, even if that actually makes the system inferior in many ways. If that's what people think of and want, fine, but that's not what I'd be after.
 
  • Like
Likes gleem
  • #54
jack action said:
The flyball governor has no knowledge of what is happening.
Neither does a neural network.

If the issue is what machines can do and should do, what is the utility of a narrow definition of AI?

Pattern recognition is a method. Our policies should be method independent. If not, they can be obsoleted overnight.
 
  • Like
Likes russ_watters
  • #55
jack action said:
Another example is learning an unknown language. You can put any human in an environment where everybody speaks a language other than his or her own, and this human will learn it, without anyone teaching it to him, just by observing and noticing patterns. It's a question of time, but it will happen.
FYI, regarding language:
http://news.mit.edu/2018/machines-learn-language-human-interaction-1031

I don't think that example is of learning a language from scratch, but rather learning the use of language in everyday life. We're always going to build language and intelligence programs into computers simply because it's easier that way. It takes decades to teach a human to be functional -- why would we want that in a computer when you can copy and paste? But in this example, the computer is learning to be more conversational in the way it speaks. It starts as mimicking (like a parrot or a human would), but here's a telling part in the article:
A robot equipped with the parser, for instance, could constantly observe its environment to reinforce its understanding of spoken commands, including when the spoken sentences aren’t fully grammatical or clear. “People talk to each other in partial sentences, run-on thoughts, and jumbled language. You want a robot in your home that will adapt to their particular way of speaking … and still figure out what they mean,”
The robot starts out speaking properly, but learns by mimicking humans to speak poorly. So again, the defining characteristic that it lacks is imprecision. It needs to learn how to be bad at speaking to be more human!

Is this really what we want out of AI?
 
  • #56
russ_watters said:
How do you know that you aren't just acting on an extremely complex set of if/then/else statements based on an extremely complex set of inputs?
Like I said, maybe one could define intelligence as "advanced feedback", and there is no clear line where one ends and the other begins. How do you know that you are a human and not an ape or a tree? Apparently, there are enough differences that we created different words for them.

That is a problem arising from the fact that humans love to classify things, when such classifications don't really exist in nature.
russ_watters said:
Even worse, how can you be sure that what differentiates you from a computer isn't that you suck at it?

People, dogs and gnats are unpredictable largely because they suck at being logical, not because they are intelligent.
I don't agree with the use of "suck at it". My definition of intelligence is "the ability to identify patterns". I prefer saying some can be better at it than others. Humans are most likely the best at it. But if I compare a cheetah with a human, I say that its ability to run is better, not that the human sucks at running. The latter implies that the human is not a worthy living being because he can't run as well as the best of living beings. I don't want to imagine what one would think of a fish with that kind of thinking...
russ_watters said:
even if that actually makes the system inferior in many ways.
russ_watters said:
The robot starts out speaking properly, but learns by mimicking humans to speak poorly. So again, the defining characteristic that it lacks is imprecision. It needs to learn how to be bad at speaking to be more human!
Here you have a lot of judgments about AI which don't affect its definition. How do you define "inferior"? What is "speaking properly" and "speaking poorly"? I don't think you'll find scientific laws to define these.
russ_watters said:
Is this really what we want out of AI?
That is a good reflection for the present segment of "Ask the advisors". I can't wait to read people's opinions on this.
russ_watters said:
It takes decades to teach a human to be functional -- why would we want that in a computer when you can copy and paste?
If you look at AI as a machine that can do what I can do, just instead of me, that view is logical. But AI can be much more, and that's where I find it interesting. If we assume that a cure for cancer exists, humans most likely can find it by looking for patterns that link causes and effects. It may take decades, maybe centuries. But with AI, it will most likely go faster. AI is a power tool for finding patterns. Sure, you can use AI to do simplistic tasks - like you can use an electric drill to make a hole in a piece of paper - but I find that to be simply a waste of resources, for amusement or just to say "I do it because I can."
anorlunda said:
If the issue is what machines can do and should do, what is the utility of a narrow definition of AI?

Pattern recognition is a method. Our policies should be method independent. If not, they can be obsoleted overnight.
By using the word "policies", I think you are worrying about what type of decisions we should leave to machines. If so, I do share some concerns about this, and I did share them in my answer to @Greg Bernhardt 's question. Short answer: I fear that people will blindly trust a machine's decision over their own. That applies to AI or flyball governors, but the smarter a machine "looks", the easier it is to go down that path.

But I feel @Greg Bernhardt 's question was broader than this subject alone.
 
  • #57
I have suppressed my urge to post anything about AI since I believe intelligence as well as artificial intelligence both have rather obscure definitions.
-
I will say what no one else has said in this thread so far (I think): Human intelligence - and some animals display this as well - has the ability to say "What if..." A very eccentric engineer once told me that humans, compared to machines, can solve problems with significant amounts of missing data. In other words, we say "What if..."
 
  • Like
Likes nsaspook and BillTre
  • #58
Averagesupernova said:
I have suppressed my urge to post anything about AI since I believe intelligence as well as artificial intelligence both have rather obscure definitions.
-
I will say what no one else has said in this thread so far (I think): Human intelligence - and some animals display this as well - has the ability to say "What if..." A very eccentric engineer once told me that humans, compared to machines, can solve problems with significant amounts of missing data. In other words, we say "What if..."
So do you believe that AI is forever doomed to not be able to do extrapolation? I don't.
 
  • #59
phinds said:
So do you believe that AI is forever doomed to not be able to do extrapolation? I don't.
No, I can't say for sure that I believe that. There are systems out there now that are programmed to ignore inputs that are outside normal limits. But they are basic, simple systems, such as an engine control module that recognizes a bogus reading from an oxygen sensor and simply meters fuel based on a preloaded table. It notifies the driver that there is a problem, and it ends there. It does not troubleshoot to find a defective sensor, broken wire, bad connection, etc. If it did, it would have only a limited number of scenarios to troubleshoot for. A human would notice wiring that does not look original (for instance) and suspect that someone at one time made an improper repair, or something of that nature. It takes humans the better part of a lifetime of learning by experience. Doing a job by looking something up in a troubleshooting guide is no different from following a set of if-then instructions in a computer. It will never come close to human experience, which in my opinion is required for good extrapolation.
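For what it's worth, the limp-home logic I'm describing amounts to something like this - the threshold and the fallback table are invented toy values, though a narrowband oxygen sensor really does read roughly 0.1-0.9 V:
Code:
#include <stdio.h>

int main(void) {
    double o2_volts = 2.7;  /* bogus reading from a failed sensor */
    const double fallback_fuel[3] = {0.8, 1.0, 1.3};  /* preloaded table */
    int load_bucket = 1;    /* current engine load */

    if (o2_volts < 0.05 || o2_volts > 1.0) {
        /* Out of normal limits: ignore the sensor, meter fuel from the
           table, light the check-engine lamp - and it ends there. No
           troubleshooting for broken wires or bad connectors. */
        printf("CEL on; open-loop fuel trim = %.2f\n", fallback_fuel[load_bucket]);
    } else {
        printf("Closed-loop fueling from O2 sensor (%.2f V)\n", o2_volts);
    }
    return 0;
}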
 
  • #60
Averagesupernova said:
A human would
Why compare it with humans at all? Is it because of your understanding of the word intelligence?
 
