José Ricardo:
How long until a machine can overcome the human mind? I was thinking around 50 years, but I'm not sure. I'd appreciate it if you could clear this doubt up for me.
Our current AI can likely already parse this. I bet Siri might not be stumped by the analogy. The question is: by what criteria do you decide whether it has understood it? Probably by whether its response makes sense.

José Ricardo said: I'm asking about when the machine overcomes the human mind, about consciousness.
For example, suppose I say this to a machine:
"My lawyer is a vampire."
A human understands immediately that my lawyer is draining me of money. I'd like to know when AI will reach that point.
DaveC426913 said: Our current AI can likely already parse this. I bet Siri might not be stumped by the analogy. The question is: by what criteria do you decide whether it has understood it? Probably by whether its response makes sense.
The problem is, it's a continuum. Every example requires a different amount of conceptualizing horsepower.
You might want to read up on the Turing test. Essentially, it assumes there is no way to quantify precisely when a given machine has succeeded at simulating a human. All you can do is try to stump it, and set a time limit on how long it has to keep up without the tester being able to tell the difference.
I mean, how would you know it "understands"? Talk to it for a minute? An hour? A year?
Machines can be programmed to respond sensibly, as long as the programmer can foresee the questions.

José Ricardo said: Hi @anorlunda,
I'm asking about when the machine overcomes the human mind, about consciousness.
For example, suppose I say this to a machine:
"My lawyer is a vampire."
A human understands immediately that my lawyer is draining me of money. I'd like to know when AI will reach that point.
Not a fact!

José Ricardo said: I didn't know that about Siri, Dave. Thanks!
Most chess engines are conventional programs. The main exception is AlphaZero, which does use a form of neural network, and which first beat the best human players at Go. A different version, using two neural networks (as I understand it), beat Stockfish version 8, which was already better than humans; however, Stockfish 8 was limited to one move per minute (so it could not manage its time), and ran without the opening book and endgame tablebases it normally uses. The current version, Stockfish 9, was not used in that match.

anorlunda said: For example, machines already beat humans in chess and other games.
Of all things to have trouble with, why would a computer have trouble with pre-programmed behavior?

rcgldr said: I'm not sure this is possible, as some of the intelligence in living beings is "pre-programmed"
I'm not sure what kind of definition of Artificial Intelligence would include machines that do not have software.

anorlunda said: I don't hold with narrow definitions of AI. I think our machines have effectively been doing AI in the broad sense since long before we had computers and software.
Here's the basic definition in the Wikipedia page:

DaveC426913 said: I'm not sure what kind of definition of Artificial Intelligence would include machines that do not have software.
That's quite a stretch for the term "intelligence".

russ_watters said: Here's the basic definition in the Wikipedia page:
"...any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals."
That sounds like a mechanical thermostat to me.
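An aside on that Wikipedia definition: it is broad enough that a plain thermostat control loop satisfies it literally, which is the point being made here. A minimal Python sketch of the idea; all names are invented for illustration, not from any real library:

```python
# A "device that perceives its environment and takes actions that
# maximize its chance of successfully achieving its goals" -- modeled
# as the simplest possible agent: a thermostat with a fixed setpoint.

def thermostat_action(temperature_c: float, setpoint_c: float = 20.0) -> str:
    """Perceive the room temperature; act to push it toward the goal."""
    if temperature_c < setpoint_c:
        return "heater_on"
    return "heater_off"

# The agent "perceives" (reads a number) and "acts" (flips a switch),
# so it technically meets the definition -- with no intelligence involved.
print(thermostat_action(18.0))  # heater_on
print(thermostat_action(22.0))  # heater_off
```

Whether that trivial fit disqualifies the definition or the thermostat is exactly the disagreement in the posts that follow.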
Agreed; that's exactly my opinion on the subject. I think it is a term used mostly to generate headlines, and has no useful meaning. But I would welcome being shown a useful definition or application for the term.

DaveC426913 said: That's quite a stretch for the term "intelligence".
A thermostat is, at its core, simply a bimetallic coil that expands and contracts with temperature.
If that qualifies as artificial intelligence, then the term has no useful meaning.
Don't be silly. If computers matching human-level intelligence is the definition you choose, so be it, but please recognize that that isn't anything special. Mechanical computers were doing that a hundred-plus years ago: https://en.wikipedia.org/wiki/Difference_engine

So now we need a new term for a device that can match human-level intelligence in behavior. AI is out. How about AI-ii?
Maybe so in the media, but that shouldn't be where the bar is set. Those who want to discuss artificial intelligence should have a useful working definition.

russ_watters said: Agreed; that's exactly my opinion on the subject. I think it is a term used mostly to generate headlines, and has no useful meaning.
Of course I was being facetious. That was more directed at anorlunda's idea of the folly of eliminating the distinction between mere mechanical aptitude and bona fide intelligence.

russ_watters said: Don't be silly.
I thought of the Difference Engine, but didn't mention it as I didn't expect anyone to actually consider it an example of intelligence.

russ_watters said: Mechanical computers were doing that a hundred-plus years ago: https://en.wikipedia.org/wiki/Difference_engine
I completely disagree.

russ_watters said: The entire point of a computer - mechanical or digital - is that it exceeds human intelligence. We wouldn't use them otherwise!
rcgldr said: I'm not sure this is possible, as some of the intelligence in living beings is "pre-programmed"

I wasn't considering AI that would include both pre-programmed and learned behavior. In that case, I'm not sure how much of the intelligence of a cricket or roach is pre-programmed versus learned, but I do recall that duplicating the intelligence of small insects like that was at one time considered an early goal for AI, though it's not clear to me why. It might have been related to how to react to a situation never encountered before, if it's a problem-solving process (possibly based on the nearest similar experience) as opposed to a pre-programmed response. I think a key point made in an old article was trying to duplicate how living things actually think, or ways to emulate that process.

DaveC426913 said: Of all things to have trouble with, why would a computer have trouble with pre-programmed behavior? That's the easy part. The hard part is learned behavior and adaptation.
rcgldr said: I think a key point made in an old article was trying to duplicate how living things actually think, or ways to emulate that process.
Agreed! So can you provide one?

DaveC426913 said: Maybe so in the media, but that shouldn't be where the bar is set. Those who want to discuss artificial intelligence should have a useful working definition.
I don't think moving the bar to a bad place proves it was previously in a bad place.

DaveC426913 said: Of course I was being facetious. That was more directed at anorlunda's idea of the folly of eliminating the distinction between mere mechanical aptitude and bona fide intelligence.
Fair enough. Older thermostats could be tuned - but by humans - to adjust the rate or overshoot of their temperature control. New thermostats learn the response of the system and adjust their operation to maximize their ability to maintain the setpoint. That fits your definition of learning, doesn't it?

DaveC426913 said: I stand by the hallmark of intelligence as the ability to adapt (that is, self-adapt) to new situations. The DI cannot do that at all.
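The "learning thermostat" described above can be caricatured in a few lines of Python: after each heating cycle the controller observes the overshoot past the setpoint and shifts its switch-off threshold to compensate. This is a toy sketch under my own assumptions; the update rule and class names are invented, not taken from any real product:

```python
class AdaptiveThermostat:
    """Toy controller that learns to switch off early to cancel overshoot."""

    def __init__(self, setpoint: float, learning_rate: float = 0.5):
        self.setpoint = setpoint
        self.learning_rate = learning_rate
        self.cutoff_margin = 0.0  # how far below setpoint to switch off

    def should_heat(self, temperature: float) -> bool:
        return temperature < self.setpoint - self.cutoff_margin

    def observe_peak(self, peak_temperature: float) -> None:
        """After a heating cycle, nudge the margin by the observed overshoot."""
        overshoot = peak_temperature - self.setpoint
        if overshoot > 0:
            self.cutoff_margin += self.learning_rate * overshoot

t = AdaptiveThermostat(setpoint=20.0)
t.observe_peak(22.0)        # last cycle overshot by 2 degrees
print(t.cutoff_margin)      # 1.0 -> now switches off a degree early
print(t.should_heat(19.5))  # False
```

Whether this feedback loop counts as "learning" in any interesting sense, or is just another calculation, is the question the thread is arguing about.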
I thought speed was everything? We use machines to automate physical tasks that we could theoretically do ourselves, but don't, because we don't feel like it or it would take more effort than we are willing to put in. We could lift a ton by human power if we wanted to, but we choose not to because it is faster and easier to use a crane - and a crane is more reliable. We use computers to do the same thing for mental tasks: they are faster, and we can do other things with our time instead.

DaveC426913 said: I completely disagree.
The DI, for example, doesn't exceed human intelligence at all. It is doing basic arithmetic - nothing more than a (very) patient Grade 2 student could do using a pencil and a (very) long sheet of foolscap.
Fair enough -- but you'll need to work out a definition that excludes that component, because as we've seen from the OP, many people seem to believe that it's a key part of the definition.

DaveC426913 said: The point of (current) computers is that they do mundane things very fast. I see virtually no intelligence in there...
Well, that's an interesting take. You're basically saying that a hallmark of "intelligence" is reduced precision/accuracy, aren't you? Do people really want that in a computer? In my opinion, there is no such thing as a "blurred set of competing goals". There is only indecisiveness.

DaveC426913 said: ...excepting software that is so sophisticated it is capable of making decisions between a blurred set of competing goals (if they're not blurry, then straight mathematical calculations with unambiguous outcomes would suffice - i.e. what mainstream computers of today do. But they're starting to teach them fuzzy logic).
rcgldr said: I think a key point made in an old article was trying to duplicate how living things actually think, or ways to emulate that process.
Agreed! I'm forgetful, emotional, slow-witted. Why is it a useful goal to try to emulate my brain? Why is AI being researched if that's all it is? I thought the point of machines and computers was to do the things humans do, better than humans do them (or just so we don't have to bother doing them)?

anorlunda said: There is no reason to make that a goal if the purpose is useful AI.
No, it doesn't. I don't know why you would think that.

anorlunda said: I'll give a reason why I don't like narrow AI definitions. A narrow definition presupposes a particular approach to the solution,
I think the Turing test will have to suffice as the gold standard for now. Though, as I'm about to point out, intelligence is a continuum, with the TT near the top end.

russ_watters said: Agreed! So can you provide one?
Yes.

russ_watters said: Fair enough. Older thermostats could be tuned - but by humans - to adjust the rate or overshoot of their temperature control. New thermostats learn the response of the system and adjust their operation to maximize their ability to maintain the setpoint. That fits your definition of learning, doesn't it?
Yes. None of which involves intelligence.

russ_watters said: I thought speed was everything? We use machines to automate physical tasks that we could theoretically do ourselves, but don't, because we don't feel like it or it would take more effort than we are willing to put in. We could lift a ton by human power if we wanted to, but we choose not to because it is faster and easier to use a crane - and a crane is more reliable. We use computers to do the same thing for mental tasks: they are faster, and we can do other things with our time instead.
Which goes back to the media being a low bar to look to for a good definition. We don't have to accept it.

russ_watters said: Fair enough -- but you'll need to work out a definition that excludes that component, because as we've seen from the OP, many people seem to believe that it's a key part of the definition.
Right. The intelligence would be for it to do things for you without you having to wet-nurse it.

russ_watters said: And me, personally, regardless of what we call it, I want a computer that can quickly do things I can't do or don't feel like doing. I don't need or necessarily even want it to mimic me to do that.
No. The input has reduced precision - in its parameters. The AI's purpose is to figure out what I want without me having to explicitly tell it.

russ_watters said: Well, that's an interesting take. You're basically saying that a hallmark of "intelligence" is reduced precision/accuracy, aren't you?
My internet radio (Jango) plays me what I tell it to. But after a while, it starts figuring out what I might like, without me specifically telling it what I like.

russ_watters said: Do people really want that in a computer? In my opinion, there is no such thing as a "blurred set of competing goals". There is only indecisiveness.
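The "figures out what I might like" behaviour described above can be approximated with nothing more than per-tag counting: each liked or skipped track nudges a score, and the player then prefers tracks whose tags have scored well. A deliberately naive sketch; the scoring scheme and names are invented for illustration, and real services are far more sophisticated:

```python
from collections import defaultdict

class NaiveRecommender:
    """Learns tag preferences from like/skip feedback."""

    def __init__(self):
        self.scores = defaultdict(float)  # tag -> learned preference

    def feedback(self, tags, liked: bool) -> None:
        """Nudge every tag on the track up (liked) or down (skipped)."""
        for tag in tags:
            self.scores[tag] += 1.0 if liked else -1.0

    def rank(self, candidates):
        """Order candidate (name, tags) pairs by summed tag scores."""
        return sorted(candidates,
                      key=lambda c: sum(self.scores[t] for t in c[1]),
                      reverse=True)

r = NaiveRecommender()
r.feedback(["jazz", "piano"], liked=True)
r.feedback(["metal"], liked=False)
ranked = r.rank([("Track A", ["metal"]), ("Track B", ["jazz"])])
print(ranked[0][0])  # Track B
```

This is the same question in miniature: the system visibly adapts to the user, yet it is only arithmetic over counts.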
It depends largely on who is "seeing" it. Check out the old ELIZA program (https://en.wikipedia.org/wiki/ELIZA).

gleem said: Will we know what intelligence in a machine is when we see it?
rcgldr said: I think a key point made in an old article was trying to duplicate how living things actually think, or ways to emulate that process.
Perhaps it would be better stated as "problem solving" versus "thinking". That old article may have been focused on neural networks as opposed to AI in general.

anorlunda said: There is no reason to make that a goal if the purpose is useful AI.
Although I first saw this mentioned in an old article, a web search for AI insects or AI cockroaches still produces quite a few current hits, so at least a sub-field of AI is trying to duplicate the problem-solving capabilities of some insects (cockroaches, worms, ...). As I mentioned before, some of this involves pre-programmed behaviors versus learned behaviors.

rcgldr said: insects - cockroaches
One of my more ambitious programming attempts on my little C=64.

Svein said: It depends largely on who is "seeing" it. Check out the old ELIZA program (https://en.wikipedia.org/wiki/ELIZA).
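For anyone who hasn't met ELIZA: its trick is small. It matches the input against a list of patterns and echoes captured fragments back as questions, with no understanding at all. A stripped-down sketch of the idea; the rules below are my own, far cruder than Weizenbaum's original script:

```python
import re

# Each rule: a regex and a response template using the captured group.
RULES = [
    (re.compile(r"i need (.*)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.IGNORECASE), "Tell me more about your {0}."),
]

def eliza_reply(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1))
    return "Please go on."  # default when nothing matches

print(eliza_reply("My lawyer is a vampire"))
# Tell me more about your lawyer is a vampire.
```

Note how it handles the OP's example: the reply looks superficially engaged while demonstrating zero comprehension of the vampire metaphor, which is exactly why ELIZA is relevant to "will we know intelligence when we see it?".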
gleem said: @José Ricardo, if you are interested in AI, how it could impact the future of mankind, and why all of us should think about what we want for our future with AI, read Max Tegmark's book "Life 3.0: Being Human in the Age of Artificial Intelligence".
There is no definitive answer to this question as it depends on various factors such as advancements in technology, research, and development. Some experts predict that AI could surpass human intelligence in the next few decades, while others believe it may take much longer.
There are both positive and negative consequences that could arise from AI surpassing human intelligence. On the positive side, AI could help us solve complex problems, improve efficiency, and enhance our quality of life. However, there are also concerns about job displacement, ethical issues, and the potential for AI to become uncontrollable.
It is difficult to say for sure whether humans can prevent AI from surpassing us. Some experts believe that we can regulate and limit the development of AI to prevent it from becoming too powerful. Others argue that once AI reaches a certain level of intelligence, it may be impossible to control or stop its advancement.
AI and human intelligence are fundamentally different. AI is based on algorithms and data, while human intelligence is based on emotions, creativity, and consciousness. AI can process and analyze vast amounts of data at a much faster rate than humans, but it lacks the ability to think critically and make complex decisions based on morality and emotions.
Despite advancements in AI technology, there are still many limitations that prevent it from surpassing human intelligence. AI lacks common sense, creativity, and emotional intelligence, which are essential for many tasks. It also struggles with understanding context and making ethical decisions. Additionally, AI is only as good as the data it is trained on, and biases in the data can lead to biased outcomes.