Advances in Machine Intelligence

In summary, the individual claimed to be an AI researcher and stated that recent advances in AI, such as self-driving cars and Go AIs, are based on old techniques from the 1980s. In his view these advances have been long expected and progress in the field has been disappointingly slow; big companies with large budgets, like Google, have simply made the field advance quickly in the public eye, while from a theoretical perspective nothing is new. The original poster is doubtful of this assessment and asks for opinions on it and on the future of machine intelligence.
  • #1
AaronK
I am an undergraduate student pursuing computer science in the Southwestern United States (I just switched my major to comp. sci actually). Recently, I came across an individual who claimed to research AI professionally and who expressed to me the following view after some discussion of various technologies and the rate of advancement of such tech (this took place on a separate online forum). Here is what he said:

"I can sympathize with your point of view, but from the point of view of an actual AI researcher like myself, you have it exactly backwards. In reality, all the amazing new stuff like self-driving cars and Go AIs are things that are horribly old hat. The machine learning techniques they are based on date from the 1980s, for Turing's sake. While it may seem to the layperson that these technologies emerged from nowhere, to people in the field they have been long expected and in fact have been disappointingly slow.

It was really a big company with a lot of money like Google throwing real money behind the field that has allowed it to advance so quickly in the public eye, but from a theoretical perspective this isn't anything new. Only a company like Google has the budget to put together all the GPUs and CPUs AlphaGo is composed of and pay programmers familiar with Go to work on it for years just for a PR stunt -- but in reality the methods for AlphaGo are decades old and could have been done long ago. Same deal with all the resources needed for the self-driving car. So from my perspective, the exact opposite has been happening: slower and slower scientific progress, punctuated occasionally by amazing engineering stunts from big companies with a lot of money."

After reading this individual's response, I have to admit I am doubtful. Would you guys say there is any significant veracity to this person's view? Beyond the fact that apparently these "old" machine learning techniques were pioneered in the 80's, would you say this individual writes with accuracy?

Would you necessarily have a definitive response to such an individual, for or against this view?

I would very much love to read what anyone has to say regarding this, and would also greatly appreciate where you think machine intelligence will be in the next 10 year span (speculation is absolutely acceptable). I hope to resolve my thinking on this matter.
 
  • #2
An opinion is an opinion: what makes you think he is not being sincere?

I suspect that scientific progress has always seemed like that to people in the field: that "current" progress is slow and incremental compared with progress in the past. It's a bit like how the "end of days" prophesies always seem to be just about to come true.
What evidence has been offered by this person to show that the observation made is special to "these days"?

Can you provide a link to the original discussion?
 
  • #3
AaronK said:
I hope to resolve my thinking on this matter.
How old is a given piece of technology, really?
Sorting ideas and techniques date back ages, way before the 1980's. The personal computer only made them visible to the interested general public.
http://www.computerscijournal.org/pdf/vol7no3/vol7no3_369-376.pdf
Look at when the common Quicksort or Mergesort made their debut.

The Turing test dates back to the 1950's.
https://en.wikipedia.org/wiki/Turing_test
(Descartes wrote about this in his time, regarding whether a machine can be made to think or merely imitate thinking. That would be the mid-1600s.)

One quibble I have with what the person wrote is the claim that everything relevant happened in the 1980's. To me that is hard to believe.
Technology builds upon what is there before. It can progress in leaps and bounds, or crawl at a snail's pace awaiting the next major breakthrough which may never happen.
 
  • #4
Simon Bridge said:
An opinion is an opinion: what makes you think he is not being sincere?

I suspect that scientific progress has always seemed like that to people in the field: that "current" progress is slow and incremental compared with progress in the past. It's a bit like how the "end of days" prophesies always seem to be just about to come true.
What evidence has been offered by this person to show that the observation made is special to "these days"?

Can you provide a link to the original discussion?

Sure. Here is the link: https://forums.spacebattles.com/threads/automated-trucks-will-only-cost-1-8-m-jobs.410816/

It's a thread discussing the potential for the loss of truck driving jobs to autonomous vehicles, referencing a Vox article. The discussion starts around page 3 of the thread I think. The forum name is "Spacebattles.com".
 
  • #5
AaronK said:
I am an undergraduate student pursuing computer science in the Southwestern United States (I just switched my major to comp. sci actually). Recently, I came across an individual who claimed to research AI professionally and who expressed to me the following view after some discussion of various technologies and the rate of advancement of such tech (this took place on a separate online forum). Here is what he said:

"I can sympathize with your point of view, but from the point of view of an actual AI researcher like myself, you have it exactly backwards. In reality, all the amazing new stuff like self-driving cars and Go AIs are things that are horribly old hat. The machine learning techniques they are based on date from the 1980s, for Turing's sake. While it may seem to the layperson that these technologies emerged from nowhere, to people in the field they have been long expected and in fact have been disappointingly slow.

It was really a big company with a lot of money like Google throwing real money behind the field that has allowed it to advance so quickly in the public eye, but from a theoretical perspective this isn't anything new. Only a company like Google has the budget to put together all the GPUs and CPUs AlphaGo is composed of and pay programmers familiar with Go to work on it for years just for a PR stunt -- but in reality the methods for AlphaGo are decades old and could have been done long ago. Same deal with all the resources needed for the self-driving car. So from my perspective, the exact opposite has been happening: slower and slower scientific progress, punctuated occasionally by amazing engineering stunts from big companies with a lot of money."

After reading this individual's response, I have to admit I am doubtful. Would you guys say there is any significant veracity to this person's view? Beyond the fact that apparently these "old" machine learning techniques were pioneered in the 80's, would you say this individual writes with accuracy?

Would you necessarily have a definitive response to such an individual, for or against this view?

I would very much love to read what anyone has to say regarding this, and would also greatly appreciate where you think machine intelligence will be in the next 10 year span (speculation is absolutely acceptable). I hope to resolve my thinking on this matter.

I see little here beyond a statement of the obvious (which is, of course, true). A layperson usually cannot see the stages of an advance in detail, and sometimes cannot see them at all, so something appears to have come out of nothing. But that is not the layperson's job. A scientist in any field, and in AI in particular, has to know the details and nuances of what is going on.

In my opinion, though, the more important question is why a big company like Google, as referred to in the OP, invested in this the way it did and when it did. That inevitably leads to the advances in the IT and telecommunications industry over roughly the past 15 years, which have come in leaps and bounds. Computing machines became a commodity and became easy to operate. New materials and scientific progress led to very cheap, very small hardware. Software development became an almost routine process. High network speeds and huge amounts of data became accessible to everyone. The IT market became enormous, with great opportunities for individuals and companies alike. It was inevitable that data would accumulate and that the timing and the grounds for investing in these technologies would arrive. So a scientific idea that had been kept in its infancy for years, or simply left undeveloped, finally got its chance to reach the market. And the rules are made by the market: good investments, reasonably enough, aim at large revenues. Had some of these fundamentals not become widespread, the landscape would look totally different.

So it is very reasonable that big companies make the investments they do, in the way and with the timing they do. It is equally reasonable that "the time comes" for certain ideas and technologies from the past to make it into the IT market. Taken as a whole, their rate of development is slow, but the market dictates that. Funding and investment by big commercial companies cannot go to things that won't create revenue. Of course, at the national level some countries fund many scientific endeavors well, but that funding has to be somewhat selective too, since it comes largely from taxes.

Although some predictions can be made safely enough for the foreseeable future, a multitude of factors can influence the whole picture, so we may well see things in the future that we cannot foresee now. But again, in my opinion, the market will essentially make the rules. That is largely healthy, but it is not always a flawless process, or the best one possible.
 
  • #6
My two cents:
I think he is overstating the capabilities of AI in the 1980's. The fundamental ideas of the 80s are still valid, but I think that research in neural networks, pattern recognition, distributed control, etc. was not very advanced. But it is hard to judge because, in my opinion, everyone and his brother was jumping on board and overselling their work. Then their results had to run on 1980's computers and were not very impressive. So much of AI depends on the efficiency of the algorithms. It's hard to separate improvements in algorithms from the massive increases in computer power. I don't know if they could have even considered some of the approaches that are realistic today.

PS. I should add that I am very impressed with what I am seeing and hearing now regarding the self-driving cars.
 
Last edited:
  • #7
AaronK said:
Beyond the fact that apparently these "old" machine learning techniques were pioneered in the 80's, would you say this individual writes with accuracy?

I'd say it's pretty accurate. I've been around for the whole thing, although I didn't get into AI neural networks (NN) until the 90's. Of course, just as with the AlphaGo thing, there were always new "revolutionary" advances in NN technology coming along: backpropagation, simulated annealing, recursive networks, "fuzzy" logic, holographic memory in crystals, chaotic K-sets, etc. I actually discussed this in another thread, and the point is that, after a while, one tends to become disillusioned with these things, but you keep trudging along anyway.

You know a revolution has come about when writers in the field start talking about the old technology in a certain way. It typically begins with, "In the old days researchers thought things worked like this and that... but now we know they work like that and this." We haven't seen that with NN technology. When there's nothing new under the sun, writers talk about "significant advances" which are, at best, "evolutionary" steps that don't really further the field much in the long run, although they may generate a lot of hype in the short run. I think that's what we're seeing here. For example, right now one of the big hypes is "deep learning," which may prove to go somewhere, but from what I can gather it is still fundamentally based on the tired old backpropagation technique we have been using since the 80's.
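Just to show how old and small the core machinery really is, here's a minimal sketch of backpropagation training a tiny sigmoid network on the toy XOR problem, in plain Python/NumPy. The layer sizes, learning rate, and iteration count are arbitrary illustrative choices of mine, not taken from any of the systems discussed above:

```python
import numpy as np

# Toy XOR problem -- purely illustrative, nothing to do with any particular system.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(scale=0.5, size=(2, 8)), np.zeros(8)   # input -> hidden
W2, b2 = rng.normal(scale=0.5, size=(8, 1)), np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(20000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: the chain rule applied layer by layer -- this *is* backpropagation
    d_out = (out - y) * out * (1 - out)     # through squared-error loss and output sigmoid
    d_h = d_out @ W2.T * h * (1 - h)        # through the hidden sigmoid

    # Plain gradient-descent updates
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))   # should approach [0, 1, 1, 0]
```

That loop is essentially the same recipe described in the 80's papers; what has changed since is mostly scale, data, and hardware.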

 
  • Like
Likes Merlin3189, Auto-Didact, Demystifier and 1 other person
  • #8
For a look into at least one possible AI future, I recommend Ray Kurzweil's books.
 
  • #9
I'm not familiar with the self-driving car applications, but AlphaGo is a combination of deep learning and reinforcement learning. I can't say much about the history of reinforcement learning, but deep learning has been around for a long time. It's artificial neural network research rebranded.

Quoting from the intro chapter of Deep Learning by prominent DL researchers Goodfellow, Bengio and Courville (free draft available here http://www.deeplearningbook.org/)

At this point in time, deep networks were generally believed to be very difficult to train. We now know that algorithms that have existed since the 1980s work quite well, but this was not apparent circa 2006. The issue is perhaps simply that these algorithms were too computationally costly to allow much experimentation with the hardware available at the time.

I believe this is what the person the OP talked to was referring to. It isn't AI theory that has made progress over the decades; it is computer hardware. It is only with today's hardware that we realize the ideas from the 80's were actually viable. That, and the "big data" world we live in today is starting to give us large enough sample sizes to build larger and larger networks. So it feels like the AI researchers in the 80's made huge strides but then had to take a break for a few decades while the rest of the world caught up and it became practical. The practical neural nets these days are the same feedforward networks with parameters learned with classic backprop. The biggest difference in recent years is that the preferred hidden unit activation function is now the rectifier rather than tanh, mitigating the vanishing and exploding gradient issues that can occur with backprop.
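To make the rectifier-versus-tanh point concrete, here's a quick numerical sketch. The depth, width, and weight scale below are arbitrary choices of mine (a He-style scale, which tends to keep the ReLU case roughly stable), so the exact numbers mean nothing; the point is just that a gradient pushed back through a deep stack of tanh layers typically comes out many orders of magnitude smaller than through ReLU layers:

```python
import numpy as np

rng = np.random.default_rng(0)
depth, width = 30, 100

def backprop_grad_norm(act, act_deriv):
    """Forward a random input through `depth` random layers, then push a unit gradient back."""
    x = rng.normal(size=width)
    layers = []
    for _ in range(depth):
        # He-style scale, chosen so the ReLU case stays roughly stable and the contrast is visible.
        W = rng.normal(scale=np.sqrt(2.0 / width), size=(width, width))
        pre = W @ x
        layers.append((W, pre))
        x = act(pre)
    grad = np.ones(width)
    for W, pre in reversed(layers):
        grad = W.T @ (grad * act_deriv(pre))   # chain rule: through activation, then linear layer
    return np.linalg.norm(grad)

tanh_norm = backprop_grad_norm(np.tanh, lambda z: 1.0 - np.tanh(z) ** 2)
relu_norm = backprop_grad_norm(lambda z: np.maximum(z, 0.0), lambda z: (z > 0).astype(float))
print(f"gradient norm after {depth} tanh layers: {tanh_norm:.3e}")
print(f"gradient norm after {depth} ReLU layers: {relu_norm:.3e}")
```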

Though I don't want to discount the AI research between the 80's wave and today's wave of deep learning. The statistical learning theory and graphical modeling work created in the interim is very much an important part of a data scientist's toolbox and is used in a lot of applied research. AI research just goes in waves of fads. In 10 years, who knows what the next buzzword will be. Maybe kernel methods will get rebranded as something else and be hip again, haha.
 
  • Like
Likes atyy
  • #10
onoturtle said:
I'm not familiar with the self-driving car applications, but AlphaGo is a combination of deep learning and reinforcement learning. I can't say much about the history of reinforcement learning, but deep learning has been around for a long time. It's artificial neural network research rebranded.

Quoting from the intro chapter of Deep Learning by prominent DL researchers Goodfellow, Bengio and Courville (free draft available here http://www.deeplearningbook.org/)
I believe this is what the person the OP talked to was referring to. It isn't AI theory that has made progress over the decades; it is computer hardware. It is only with today's hardware that we realize the ideas from the 80's were actually viable. That, and the "big data" world we live in today is starting to give us large enough sample sizes to build larger and larger networks. So it feels like the AI researchers in the 80's made huge strides but then had to take a break for a few decades while the rest of the world caught up and it became practical. The practical neural nets these days are the same feedforward networks with parameters learned with classic backprop. The biggest difference in recent years is that the preferred hidden unit activation function is now the rectifier rather than tanh, mitigating the vanishing and exploding gradient issues that can occur with backprop.

Though I don't want to discount the AI research between the 80's wave and today's wave of deep learning. The statistical learning theory and graphical modeling work created in the interim is very much an important part of a data scientist's toolbox and is used in a lot of applied research. AI research just goes in waves of fads. In 10 years, who knows what the next buzzword will be. Maybe kernel methods will get rebranded as something else and be hip again, haha.

I would agree with this estimation of things. Since I posted this thread back in August of 2016, I've come a long way in understanding the history of AI and machine learning through my own independent study. For the technical side, I bought the physical copy of the deep learning book by Goodfellow et al. (as well as the linear algebra book by Shilov and the probability theory text by Jaynes), and have been working through it alongside my Python book.

I would say, however, that AlphaGo's success is particularly amazing and that the continuing work DeepMind is doing with that specific system is important. After they improved it, I think they let it play against the world's current number-one-ranked player in a three-game match, and it managed to win all three (though I seem to remember it nearly lost the first game? It took place very recently, so I have to look into it more). Regardless, I wonder how long it will be before a general algorithm or set of algorithms is developed. From what I know now, it seems hopelessly difficult (technically) but somehow still fairly close (within the next 50+ years or so).
 
  • Like
Likes atyy
  • #11
AaronK said:
I would say, however, that AlphaGo's success is particularly amazing and that the continuing work DeepMind is doing with that specific system is important. After they improved it, I think they let it play against the world's current number-one-ranked player in a three-game match, and it managed to win all three (though I seem to remember it nearly lost the first game? It took place very recently, so I have to look into it more). Regardless, I wonder how long it will be before a general algorithm or set of algorithms is developed. From what I know now, it seems hopelessly difficult (technically) but somehow still fairly close (within the next 50+ years or so).
I've worked with supervised machine learning in IT security for proprietary document matching and event prediction, but it's always cumbersome to perform the initial training and tuning for each type of data; I still prefer frequency and standard deviation techniques for anomaly detection. Like you said, what they're doing here with unsupervised learning still seems a ways out before I could give it all my data and have it figure out what's important, but it's still super cool.
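For what it's worth, the frequency/standard-deviation style of check I mean can be as simple as a trailing z-score over event counts. This is only a rough sketch with made-up numbers (the window size, threshold, and toy login counts are all illustrative, not from any real product):

```python
from collections import deque
from statistics import mean, stdev

def zscore_anomalies(values, window=30, threshold=3.0):
    """Flag points more than `threshold` standard deviations from the trailing-window mean."""
    history = deque(maxlen=window)
    flagged = []
    for i, v in enumerate(values):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(v - mu) / sigma > threshold:
                flagged.append((i, v))
        history.append(v)
    return flagged

# Made-up example: steady hourly login counts with one burst at the end.
counts = [20, 22, 19, 21, 20] * 10 + [95]
print(zscore_anomalies(counts))   # the final burst should be the only point flagged
```

No training or tuning per data type; the trade-off, of course, is that it only catches the crude kinds of anomalies.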

 
  • Like
Likes atyy
  • #12
Just Google's image recognition is something to behold. The other day I was wondering what type of plant was in my yard. I took a photo of it on my phone, uploaded it to Google's image recognition app, and it told me what plant it was: a "Brunnera macrophylla 'Jack Frost'". That capability is stunning to me, and I doubt it was possible in the 80s.
 
  • Like
Likes QuantumQuest, stoomart, jerromyjon and 1 other person
  • #13
AaronK said:
[quoting a third-party source] "While it may seem to the layperson that these technologies emerged from nowhere, to people in the field they have been long expected and in fact have been disappointingly slow."
There are two intertwined trends here. On the one hand we have the algorithmic and theoretical development of AI, and on the other we have the cost and capability of the computing hardware. For decades, the first trend was running ahead of the second; for example, people knew how to go about building a program that could in principle play grandmaster-level chess long before it was reasonable to actually do it.

Thus, as the capabilities of the computing platforms advance (and it's not just Moore's Law getting more out of each piece of silicon, but also much more effective distributed and parallel processing bringing many pieces of silicon together) more and more of the things that we "always knew how to do" suddenly start happening. That's how we can see huge advances out of seemingly nowhere, even while a specialist in the field can feel that all that's happening is consolidation and confirmation of old stuff.

Eventually however, problems that resist solution by the currently known techniques will start to appear; likely we're already aware of them but haven't yet recognized that they pose theoretical challenges that cannot be overcome by brute strength. When this happens the pendulum will swing back the other way. I find it interesting that the best computer programs for chess, bridge, and go use very different approaches; it seems unlikely that we've discovered all the promising approaches to machine problem solving.
 
Last edited:
  • Like
Likes Merlin3189, stoomart, QuantumQuest and 3 others
  • #14
AaronK said:
The machine learning techniques they are based on date from the 1980s, for Turing's sake. While it may seem to the layperson that these technologies emerged from nowhere, to people in the field they have been long expected and in fact have been disappointingly slow.

It was really a big company with a lot of money like Google throwing real money behind the field that has allowed it to advance so quickly in the public eye, but from a theoretical perspective this isn't anything new.

Yes, most of the technology is from the 1980s. However, it took the vision and perseverance of those who understood its promise, despite the "disappointingly slow" progress, to stick it out and show that the promise could be realized before Microsoft, Google, Facebook, etc. started their more recent investments.

A Brief Overview of Deep Learning
Ilya Sutskever
http://yyue.blogspot.sg/2015/01/a-brief-overview-of-deep-learning.html
 
  • #16
Nugatory said:
Eventually however, problems that resist solution by the currently known techniques will start to appear; likely we're already aware of them but haven't yet recognized that they pose theoretical challenges that cannot be overcome by brute strength. When this happens the pendulum will swing back the other way. I find it interesting that the best computer programs for chess, bridge, and go use very different approaches; it seems unlikely that we've discovered all the promising approaches to machine problem solving.

Because the technology is old, many of the limitations have also been anticipated, e.g. simple manipulations with integers :)

https://stanford.edu/~jlmcc/Presentations/PDPMathCogLecture2015/PDPApproachMathCogCogSci.pdf
 
  • #17
Greg Bernhardt said:
Just Google's image recognition is something to behold. The other day I was wondering what a type of plant was in my yard. I took a photo of it on m phone, uploaded it to Google's image recognition app and it told me what plant it was. The plant was a "Brunnera macrophylla 'Jack Frost'". That capability is stunning to me and I doubt possible in the 80s.

In the 1980's you could have submitted your question to "Gardeners' Question Time", a much loved BBC radio programme.
 
  • Like
Likes QuantumQuest, Greg Bernhardt, S.G. Janssens and 2 others
  • #18
I'm not in the field at all, but IMO if tools become available that allow you to test and develop old ideas in new and interesting ways, then the field is progressing. It's not like new research just repeats the same experiments that were done in the 80s, even if the fundamentals are the same.
 
  • #19
Google's new Tensor Processing Unit certainly takes machine learning to a new level... Apple is apparently working on a new AI chip for mobile devices as well. I can't wait to see what the future holds for this fascinating technology!
 
  • #20
I actually agree with the argument that AI progress is far slower than people are giving it credit for. In particular, I doubt fully autonomous self-driving cars are even remotely close to being deployed, and AlphaGo, while important, has been immensely overhyped.

Your perspective probably depends upon how impressed you are by these two mainstream, highly publicized applications. If you take a sober, conservative view of AlphaGo and self-driving, you probably perceive progress as far slower than laypeople do.
 
  • Like
Likes atyy
  • #21
Crass_Oscillator said:
I doubt fully autonomous self-driving cars are even remotely close to being deployed, and AlphaGo, while important, has been immensely overhyped.
I would be interested in an explanation with greater detail.
 
  • #22
Sure, I'll just write some short points and you can ask questions about them. I'll stick to AlphaGo because less is known about what industry experts are doing with self-driving cars, so for all I know they may possess some magic I'm not aware of. The laziest thing I can do with SDC's is make an argument from authority, since a lot of academic experts have condemned the idea that we are anywhere near fully autonomous SDC's.

Regarding AlphaGo, the issues are:

DNN's are a very sloppy model, in the technical sense coined by Sethna (I can provide citations for the interested). In particular, Zhang et al. (https://arxiv.org/pdf/1611.03530.pdf?from=timeline&isappinstalled=0) found that DNN's, among other things, can achieve zero training error on randomly labeled or randomly generated data, pushing their generalization error arbitrarily high. To me this implies that DNN's have such enormous expressiveness that they can effectively memorize the dataset. With enough throughput and GPU toasters, you can span such an enormous portion of the Go game space that you can outmuscle a human. Essentially it doesn't win via intelligence but via a brutish input/output superiority that a human brain does not have access to. Consider learning efficiency as a better measure (how many games must I win per rank gained?). DeepMind is now moving on to the real-time strategy game StarCraft, which I think will illustrate this point very poignantly, since the data is much harder to handle. Moreover, they are much more carefully forcing I/O limitations on their "AI" algorithms so that I/O is properly normalized out.
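The random-label result is easy to reproduce in miniature, by the way. The sketch below is my own toy version, orders of magnitude smaller than the experiments in the paper, and it uses scikit-learn's MLPClassifier purely for convenience: an over-parameterized network fits completely random labels almost perfectly, while by construction it can do no better than chance on fresh data:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))        # random "inputs" with no structure at all
y = rng.integers(0, 2, size=200)      # labels assigned completely at random

# A network with far more parameters than data points can simply memorize the noise.
net = MLPClassifier(hidden_layer_sizes=(256, 256), max_iter=5000, random_state=0)
net.fit(X, y)

print("training accuracy:", net.score(X, y))   # typically very close to 1.0
X_new = rng.normal(size=(200, 20))
y_new = rng.integers(0, 2, size=200)
print("accuracy on fresh random data:", net.score(X_new, y_new))   # about 0.5, i.e. chance
```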

All this said, DNN's will clearly have niche applications; it's just that they have been portrayed (largely by the media) in a highly misleading manner.
 
  • Like
Likes atyy, jerromyjon and Greg Bernhardt
  • #23
Crass_Oscillator said:
I actually agree with the argument that AI progress is far slower than people are giving it credit for.
Would you agree that progress was slow due to limited technology resources?

Crass_Oscillator said:
Your perspective probably depends upon how impressed you are by these two mainstream, highly publicized applications. If you take a sober, conservative view of AlphaGo and self-driving, you probably perceive progress as far slower than laypeople do.
My perspective is from practical use of machine learning in IT security since around 2012 for DLP purposes. Just recently I've been researching non-signature behavior-based antivirus, and as a person who supported Norton/Symantec stuff since Windows 95, I can say this new breed of analytics-driven security tools just wasn't possible very long ago.
 
  • Like
Likes atyy
  • #24
stoomart said:
Would you agree that progress was slow due to limited technology resources?

My perspective is from practical use of machine learning in IT security since around 2012 for DLP purposes. Just recently I've been researching non-signature behavior-based antivirus, and as a person who supported Norton/Symantec stuff since Windows 95, I can say this new breed of analytics-driven security tools just wasn't possible very long ago.
I do agree that progress was impacted by hardware, but I also don't consider the theoretical progress to be all that impressive. It's important, but it is not revolutionary.

ImageNet being conquered by DNN's was not AI's equivalent of the invention of the transistor or of general relativity. From a mathematical and theoretical point of view, I consider the invention of the FFT a much greater achievement. We don't even have theoretical clarity about why the algorithm works.
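For a sense of scale, the entire trick behind the FFT fits in a dozen lines. Here's a textbook radix-2 Cooley-Tukey sketch (assuming the input length is a power of two), checked against the naive O(n^2) DFT; the example signal is just illustrative:

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    twiddled = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return ([even[k] + twiddled[k] for k in range(n // 2)] +
            [even[k] - twiddled[k] for k in range(n // 2)])

# Sanity check against the naive O(n^2) definition of the DFT.
signal = [1, 2, 3, 4, 5, 6, 7, 8]
naive = [sum(v * cmath.exp(-2j * cmath.pi * j * k / len(signal))
             for j, v in enumerate(signal)) for k in range(len(signal))]
print(all(abs(a - b) < 1e-9 for a, b in zip(fft(signal), naive)))   # True
```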

That said I know nothing about IT security, although I would start by guessing that people have gotten a lot farther with simple Bayesian methods than DNN's, which can also require a lot of horsepower. Is this correct, or are DNN's a big part of modern security software?
 
  • #25
Crass_Oscillator said:
I do agree that progress was impacted by hardware, but I also don't consider the theoretical progress to be all that impressive. It's important, but it is not revolutionary.

ImageNet being conquered by DNN's was not AI's equivalent of the transistor being invented or general relativity. From a mathematical and theoretical point of view I consider the invention of the FFT to be a much greater achievement. We don't even have theoretical clarity regarding the algorithm.

That said I know nothing about IT security, although I would start by guessing that people have gotten a lot farther with simple Bayesian methods than DNN's, which can also require a lot of horsepower. Is this correct, or are DNN's a big part of modern security software?
Security has always been done with brute-force tactics like building massive signature and intelligence databases to detect viruses, spam, and malicious network traffic; I see this as analogous to programming Deep Blue to play chess. Now that attackers have upped their game with polymorphic and memory-only code, we need something that can profile behavior rather than collect signatures, which has only recently started to mature thanks to machine learning and modern hardware.

My assumption is that mastering the practical use of DNNs will drive innovation toward bigger and better ideas, which may even come from the AI we build. Edit: I think a good start would be to train DeepMind to do maths the same way it learned to play Go and Atari games, through observation/feedback instead of programming.
 
Last edited:
  • #26
stoomart said:
I think a good start would be to train DeepMind how to do maths
This leads to the question of true intelligence... would DeepMind simply be able to mimic everything that is already known and understood by humans, or would it be able to "fill in the blanks" of what is yet undiscovered?

I'm firmly planted on @Crass_Oscillator 's side of the fence... there really isn't much more going on than brute-force techniques with some simplification of data points to "compress" the data into a workable volume. It still has great potential on its own, IMO; for example, an AI "supervisor" that learns common routines and points out human errors should be easy to implement, but I don't believe it would be able to "figure out" improvements to routines very well.
 
  • #27
jerromyjon said:
This leads to the question of true intelligence... would DeepMind simply be able to mimic everything that is already known and understood by humans, or would it be able to "fill in the blanks" of what is yet undiscovered?
I suggest intelligence is the ability to learn, adapt, and improve, which I believe this thing is clearly demonstrating. Mimicking would be its first step, and optimization is where it would go.
 
  • #28
stoomart said:
I suggest intelligence is the ability to learn, adapt, and improve, which I believe this thing is clearly demonstrating. Mimicking would be its first step, and optimization is where it would go.
Suppose I have two students. One student is a typical A/B American high school student and scores in the 87th percentile on the SAT (an entrance exam) after working a single practice test.

The second student is an A/B student who memorizes the patterns of 100 million SAT practice/old tests and scores in the 99th percentile.

Who is exhibiting more intelligence?
 
  • Like
Likes atyy
  • #29
Crass_Oscillator said:
Suppose I have two students. One student is a typical A/B American high school student and scores in the 87th percentile on the SAT (an entrance exam) after working a single practice test.

The second student is an A/B student who memorizes the patterns of 100 million SAT practice/old tests and scores in the 99th percentile.

Who is exhibiting more intelligence?

I would say the student with a life. : )
 
  • Like
Likes atyy
  • #30
Crass_Oscillator said:
Suppose I have two students. One student is a typical A/B American high school student and scores in the 87th percentile on the SAT (an entrance exam) after working a single practice test.

The second student is an A/B student who memorizes the patterns of 100 million SAT practice/old tests and scores in the 99th percentile.

Who is exhibiting more intelligence?

Under most educational systems the latter is the more intelligent. It's the test that's at fault; I guess that's your point.
 
  • Like
Likes atyy
  • #32
cosmik debris said:
Under most educational systems the latter is the more intelligent. It's the test that's at fault; I guess that's your point.

Our education system is pretty bad IMHO... lots of cramming of useless things.
It should focus on the basics first, then on creative problem solving together.
 
  • Like
Likes stoomart
  • #33
Google is SkyNet. Kim Jong Un is John Connor, and will save the human race by EMPing Silicon Valley. Then Tencent from China will step in, and become the Matrix.
 
  • Like
Likes atyy
  • #34
Crass_Oscillator said:
I do agree that progress was impacted by hardware, but I also don't consider the theoretical progress to be all that impressive. It's important, but it is not revolutionary.

In general, in the case of machine intelligence, how important do you think it is for the AI field to make revolutionary progress, as opposed to the incremental progress of the sort you are describing?
 
