Advances in Machine Intelligence

  1. Aug 6, 2016 #1
    I am an undergraduate student pursuing computer science in the Southwestern United States (I just switched my major to comp. sci, actually). Recently, I came across an individual who claimed to research AI professionally and who expressed the following view after some discussion of various technologies and the rate at which they are advancing (this took place on a separate online forum). Here is what he said:

    "I can sympathize with your point of view, but from the point of view of an actual AI researcher like myself, you have it exactly backwards. In reality, all the amazing new stuff like self-driving cars and Go AIs are things that are horribly old hat. The machine learning techniques they are based on date from the 1980s, for Turing's sake. While it may seem to the layperson that these technologies emerged from nowhere, to people in the field they have been long expected and in fact have been disappointingly slow.

    It was really a big company with a lot of money like Google throwing real money behind the field that has allowed it to advance so quickly in the public eye, but from a theoretical perspective this isn't anything new. Only a company like Google has the budget to put together all the GPUs and CPUs AlphaGo is composed of and pay programmers familiar with Go to work on it for years just for a PR stunt -- but in reality the methods for AlphaGo are decades old and could have been done long ago. Same deal with all the resources needed for the self-driving car. So from my perspective, the exact opposite has been happening: slower and slower scientific progress, punctuated occasionally by amazing engineering stunts from big companies with a lot of money."

    After reading this individual's response, I have to admit I am doubtful. Would you guys say there is any significant veracity to this person's view? Beyond the fact that apparently these "old" machine learning techniques were pioneered in the 80's, would you say this individual writes with accuracy?

    Would you necessarily have a definitive response to such an individual, for or against this view?

    I would very much love to read what anyone has to say regarding this, and would also greatly appreciate where you think machine intelligence will be in the next 10 year span (speculation is absolutely acceptable). I hope to resolve my thinking on this matter.
  3. Aug 6, 2016 #2

    Simon Bridge

    Science Advisor
    Homework Helper

    An opinion is an opinion: what makes you think he is not being sincere?

    I suspect that scientific progress has always seemed like that to people in the field: that "current" progress is slow and incremental compared with progress in the past. It's a bit like how the "end of days" prophecies always seem to be just about to come true.
    What evidence has been offered by this person to show that the observation made is special to "these days"?

    Can you provide a link to the original discussion?
  4. Aug 6, 2016 #3


    Gold Member

    How old is certain technology?
    Sorting concepts and techniques date back ages, way before the 1980's. The personal computer only made them visible to the interested general public.
    Look at when the common Quicksort or Merge sort had their debut.
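
    For context, Merge sort goes back to von Neumann in 1945. A minimal sketch of the classic divide-and-conquer idea (purely illustrative, not anyone's original formulation):

```python
def merge_sort(items):
    """Divide, sort each half recursively, then merge the sorted halves."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Merge: repeatedly take the smaller front element of the two halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```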

    The Turing test dates back to the 1950's.
    (Descartes wrote about AI in his time, with regard to whether a machine can be made to think or merely imitate thinking. Is that the middle 1600's?)

    One quibble I have with what the person wrote is his suggestion that everything he knows all happened in the 1980's. To me that is well beyond belief.
    Technology builds upon what is there before. It can progress in leaps and bounds, or crawl at a snail's pace awaiting the next major breakthrough which may never happen.
  5. Aug 6, 2016 #4
    Sure. Here is the link: https://forums.spacebattles.com/threads/automated-trucks-will-only-cost-1-8-m-jobs.410816/

    It's a thread discussing the potential for the loss of truck driving jobs to autonomous vehicles, referencing a Vox article. The discussion starts around page 3 of the thread I think. The forum name is "Spacebattles.com".
  6. Aug 6, 2016 #5


    Science Advisor
    Gold Member

    I can see nothing more than stating the obvious (and of course true) here. It is really the case that a layperson cannot see the stages of advancement in detail, and sometimes cannot see these stages at all, thinking that something came out of nothing. But it is not his/her job to do that. A scientist in whichever field, and in this particular case in AI, has to know a lot of the details and nuances going on.

    In my opinion, though, the important thing is to try to see why a big company like Google, as referred to in the OP, invested in something the way it did and when it did. And this inevitably leads to thinking about the advancements in the IT and telecommunications industry, especially in the past 15 years or so, which has really been going by leaps and bounds. First, computing machines became a commodity, and their operation became very easy. New materials and scientific progress led to very cheap and small hardware. Software development became an almost routine process. Great speeds on the net and tons of data became accessible to everyone. The IT market became huge, with great opportunities for individuals and companies alike. It was inevitable that tons of data and information would accumulate, and the time and solid grounds for investing in technologies and commodities built on this had come. So a scientific idea that had been kept in its infancy for many years, or not particularly developed anyway, had its opportunity to make it to the market. And the rules are made by the market: good investments are (reasonably enough) aiming at great revenues. Had it not been the case that some fundamental things became widespread, the landscape would be totally different.

    So it is very reasonable that big companies made and make the investments they do, in the way and with the timing they do. On the other hand, it is equally reasonable that "the time comes" for certain ideas and technologies from the past to make it into the IT market. Taken as a whole, in a statistical manner, their rate of development is slow, but that is what the market dictates. Funding and investments by big commercial companies cannot be made on things that won't create revenue. Of course, at the national level there is good funding for many scientific endeavors in some countries, but this must be somewhat selective too, as funding is by and large provided through taxes.

    Although some predictions can be made safely enough for the foreseeable future, there is a multitude of factors that can influence the whole thing, so we may see things in the future that we cannot foresee now. But again, in my opinion, the market will essentially make the rules. This, although healthy, is not always a flawless process or the best it could be.
  7. Aug 6, 2016 #6


    Science Advisor
    Gold Member
    2017 Award

    My two cents:
    I think he is overstating the capabilities of AI in the 1980's. The fundamental ideas of the 80s are still valid, but I think that research in neural networks, pattern recognition, distributed control, etc. was not very advanced. But it is hard to judge because, in my opinion, everyone and his brother was jumping on board and overselling their work. Then their results had to run on 1980's computers and were not very impressive. So much of AI depends on the efficiency of the algorithms. It's hard to separate improvements in algorithms from the massive increases in computer power. I don't know if they could have even considered some of the approaches that are realistic today.

    PS. I should add that I am very impressed with what I am seeing and hearing now regarding the self-driving cars.
    Last edited: Aug 6, 2016
  8. Aug 7, 2016 #7
    I'd say it's pretty accurate. I've been around for the whole thing, although I didn't get into AI/neural networks (NN) until the 90's. Of course, just as with the AlphaGo thing, there were always new revolutionary advances in NN technology coming along. There was backpropagation, simulated annealing, recurrent networks, "fuzzy" logic, holographic memory in crystals, chaotic K-sets, etc., etc. I actually discussed this in another thread, and the point is that, after a while, one tends to become disillusioned with these things, but you keep trudging along anyway.

    You know a revolution has come about when writers in the field start talking about the old technology in a certain way. It typically begins with, "In the old days researchers thought that things worked like this and that... but now we know that things work like that and this," etc. We haven't seen that with NN technology. When there's nothing new under the sun, writers talk about "significant advances" which are, at best, "evolutionary" steps that don't really further the field much in the long run, although they may generate a lot of hype in the short run. I think that's what we're seeing here. For example, right now one of the big hypes is "deep learning", which may prove to go somewhere, but from what I can garner it's still fundamentally based on the tired old backpropagation technique we have been using since the 80's.

  9. Apr 29, 2017 #8
    For a look into at least one possible AI future, I recommend Ray Kurzweil's books.
  10. Apr 30, 2017 #9
    I'm not familiar with the self-driving car applications, but AlphaGo is a combination of deep learning and reinforcement learning. I can't say much about the history of reinforcement learning, but deep learning has been around for a long time. It's artificial neural network research rebranded.

    Quoting from the intro chapter of Deep Learning by prominent DL researchers Goodfellow, Bengio and Courville (free draft available here http://www.deeplearningbook.org/)

    I believe this is what the person the OP talked to was referring to. It isn't AI theory that has made progress over the decades; it is computer hardware. It is only with today's hardware that we realize the ideas from the 80's were actually viable. That, and the "big data" world we live in today is starting to give us large enough sample sizes to build larger and larger networks. So the AI researchers of the 80's made huge strides but then had to take a break for a few decades while the rest of the world caught up and made the ideas practical. The practical neural nets these days are the same feedforward networks with parameters learned via classic backprop. The biggest difference in recent years is that the preferred hidden-unit activation function is now the rectifier rather than tanh, easing the vanishing-gradient issues that can occur with backprop.
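
    To illustrate the vanishing-gradient point, here is a toy calculation (the depth of 20 layers and the input value are chosen arbitrarily) comparing the product of activation derivatives through a stack of tanh units versus ReLU units:

```python
import numpy as np

# Backprop multiplies one activation derivative per layer. tanh's
# derivative is < 1 everywhere except at 0, so the product shrinks
# with depth; ReLU's derivative is exactly 1 on the active side.
x = 2.0
tanh_grad = relu_grad = 1.0
a_t = a_r = x
for _ in range(20):                       # 20 stacked layers
    tanh_grad *= 1 - np.tanh(a_t) ** 2    # d/da tanh(a)
    a_t = np.tanh(a_t)
    relu_grad *= 1.0 if a_r > 0 else 0.0  # d/da relu(a)
    a_r = max(a_r, 0.0)

print(f"tanh gradient after 20 layers: {tanh_grad:.5f}")  # tiny
print(f"ReLU gradient after 20 layers: {relu_grad:.5f}")  # still 1.0
```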

    Though I don't want to discount the AI research between the 80's wave and today's wave of deep learning research. The statistical learning theory and graphical modeling work created in the interim is very much an important part of a data scientist's toolbox and is used in a lot of applied research. AI research just goes in waves of fads. In 10 years, who knows what the next buzzword will be. Maybe kernel methods will get rebranded as something else and be hip again, haha.
  11. May 27, 2017 #10
    I would agree with this estimation of things. Since I posted this thread back in August of 2016, I've come a long way in understanding the history of AI and machine learning work through my own independent study. For technical studies, I bought the physical copy of the deep learning book by Goodfellow et al. (as well as the linear algebra book by Shilov and the probability theory text by Jaynes), and have been working through it alongside my Python book.

    I would say, however, that AlphaGo's success is particularly amazing and that the continuing work DeepMind is doing with that specific system is important. After they improved it, they let it play against the world's current number-one-ranked player in a three-game match, and it managed to win all three (though I think I remember it nearly lost the first game? It took place very recently, so I have to look into it more). Regardless, I wonder how long it will be before a general algorithm or set of algorithms is developed. From what I know now, it seems hopelessly difficult (technically) but somehow still fairly close (like within the next 50 years or so).
  12. May 29, 2017 #11
    I've worked with supervised machine learning in IT security for proprietary document matching and event prediction, but it's always cumbersome to perform the initial training and tuning for each type of data; I still prefer frequency and standard deviation techniques for anomaly detection. Like you said, what they're doing here with unsupervised learning still seems a ways out before I could give it all my data and have it figure out what's important, but it's still super cool.
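
    A minimal sketch of the kind of frequency/standard-deviation anomaly detection mentioned above (the threshold, function name, and sample data are purely illustrative):

```python
import statistics

def find_anomalies(counts, k=2.0):
    """Flag indices whose count deviates from the mean by more than
    k population standard deviations (a classic z-score test)."""
    mean = statistics.mean(counts)
    sd = statistics.pstdev(counts)
    if sd == 0:
        return []  # all values identical: nothing can be anomalous
    return [i for i, c in enumerate(counts) if abs(c - mean) / sd > k]

# e.g. hourly login counts with one obvious spike at index 6
logins = [12, 15, 11, 14, 13, 12, 95, 14]
print(find_anomalies(logins))  # [6]
```

    Part of the appeal in security work is that this needs no training phase at all: the baseline is just the data's own running statistics.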

  13. May 29, 2017 #12
    Google's image recognition alone is something to behold. The other day I was wondering what type of plant was in my yard. I took a photo of it on my phone, uploaded it to Google's image recognition app, and it told me what plant it was: a "Brunnera macrophylla 'Jack Frost'". That capability is stunning to me, and I doubt it was possible in the 80s.
  14. May 29, 2017 #13



    Staff: Mentor

    There are two intertwined trends here. On the one hand we have the algorithmic and theoretical development of AI, and on the other we have the cost and capability of the computing hardware. For decades, the first trend was running ahead of the second; for example, people knew how to go about building a program that could in principle play grandmaster-level chess long before it was reasonable to actually do it.

    Thus, as the capabilities of the computing platforms advance (and it's not just Moore's Law getting more out of each piece of silicon, but also much more effective distributed and parallel processing bringing many pieces of silicon together) more and more of the things that we "always knew how to do" suddenly start happening. That's how we can see huge advances out of seemingly nowhere, even while a specialist in the field can feel that all that's happening is consolidation and confirmation of old stuff.

    Eventually however, problems that resist solution by the currently known techniques will start to appear; likely we're already aware of them but haven't yet recognized that they pose theoretical challenges that cannot be overcome by brute strength. When this happens the pendulum will swing back the other way. I find it interesting that the best computer programs for chess, bridge, and go use very different approaches; it seems unlikely that we've discovered all the promising approaches to machine problem solving.
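
    As a toy illustration of the "we always knew how to do it in principle" point: exhaustive game-tree search is conceptually trivial, and the barrier has always been combinatorial scale. A brute-force solver for a tiny Nim variant (the rules here — take 1 to 3 objects from one heap, taking the last object wins — are chosen purely for illustration):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def wins(heaps):
    """True if the player to move can force a win from this position."""
    if sum(heaps) == 0:
        return False  # the previous player took the last object and won
    for i, h in enumerate(heaps):
        for take in range(1, min(3, h) + 1):
            # Sort heaps so equivalent positions share one cache entry.
            child = tuple(sorted(heaps[:i] + (h - take,) + heaps[i + 1:]))
            if not wins(child):
                return True  # found a move leaving the opponent losing
    return False  # every move leads to a winning position for the opponent

print(wins((1,)), wins((4,)))  # True False
```

    The same minimax idea scales to chess only via decades of hardware growth plus pruning and evaluation heuristics, which is exactly the theory-versus-platform gap described above.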
    Last edited: May 29, 2017
  15. May 29, 2017 #14


    Science Advisor

    Yes, most of the technology is from the 1980s. However, it required the vision and perseverance of those who understood its promise despite the "disappointingly slow" progress to stick it out and show that its promise could be realised, before Microsoft, Google, Facebook etc started their more recent investments.

    A Brief Overview of Deep Learning
    Ilya Sutskever
  16. May 29, 2017 #15


    Science Advisor

  17. May 29, 2017 #16


    Science Advisor

    Because the technology is old, many of the limitations have also been anticipated, e.g. simple manipulations with integers. :)

  18. May 30, 2017 #17


    Science Advisor
    Homework Helper
    Gold Member

    In the 1980's you could have submitted your question to "Gardeners' Question Time", a much loved BBC radio programme.
  19. May 30, 2017 #18


    Staff Emeritus
    Science Advisor

    I'm not in the field at all but IMO if tools become available that allow you to test and develop old ideas in new and interesting ways then the field is progressing. It's not like new research is just repeating the same experiments as were done in the 80s, even if the fundamentals are the same.
  20. May 30, 2017 #19
    Google's new Tensor Processing Unit certainly takes machine learning to a new level... Apple is apparently working on a new AI chip for mobile devices as well. I can't wait to see what the future holds for this fascinating technology!
  21. May 30, 2017 #20
    I actually agree with the argument that AI progress is far slower than people are giving it credit for. In particular, I doubt fully autonomous self driving cars are even remotely close to being deployed, and AlphaGo, while important, has been immensely overhyped.

    Your perspective probably depends on how impressed you are by these two mainstream, highly publicized applications. If you take a sober, conservative view of AlphaGo and self-driving cars, you probably perceive progress as far slower than laypeople do.