Limits of Machine Learning?

  • Thread starter FallenApple
@Danko Nikolic:
I just reread your 2015 practopoiesis paper:
Regarding prediction #3, did you ever find a physiological mechanism underlying ideatheca? If not, Craddock et al. 2012 gives a specific physiological mechanism in the form of LTP-activated enzymes encoding information directly onto the neuronal cytoskeleton, i.e. CaMKII encoding information on microtubules (MTs).

Seeing as neuronal MTs remain stable after formation, i.e. they don't depolymerize the way non-neuronal MTs do, information encoded on them would remain stable throughout adulthood, providing a means of long-term memory formation that can last years or even a lifetime. Moreover, in the last few years it has become known that loss of neuronal cytoskeletal structure is associated with memory loss in Alzheimer's disease, which has even led to experiments with MT-stabilizing agents (taxanes, originally chemotherapeutic agents) in both Alzheimer's mouse models and patients. For more information, see this recent review on the subject.
 

Dr. Courtney

Education Advisor
So far, I am not convinced that machines are particularly good at learning. For example, an average human teenager can learn to drive a car with about 60 hours of experience. The machines that are learning the same task have millions of hours and are far from average.
The teen with 60 hours of experience is also far from average.
 
The limit of machine learning is that it is still too restricted to certain kinds of problems. For example, we know how to solve optimization problems and we know how to solve classification and clustering problems. But humans classify, cluster, optimize and utilize far more advanced tricks than any algorithm is capable of performing, and they do it all day every day over decades.

In terms of machine learning, the brain is analogous to a complex system of deep spiking neural networks that possess recurrences and convolutions. These networks form functional modules but also communicate with other modules, a phenomenon that probably gives rise to the flexibility of our cognition and lets us "think outside the box", playing around with symbols and ideas in ways that would not otherwise be possible.

It is the ultimate goal of the machine learning program to develop such a flexible algorithm for learning, but I doubt it can ever be done without a complex system approach. Marvin Minsky warned against the deceptive idea of peeking inside the brain to find a "mind" responsible for intelligence, when every component of the brain is itself unintelligent and the mind is just a holistic property of the system.
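As a loose structural illustration of that modules-that-also-communicate picture, here is a minimal numpy sketch of two recurrent modules with weak, sparse cross-connections. It is rate-based rather than spiking, and every size and weight scale is an arbitrary assumption, so treat it as a cartoon rather than a model of any real circuit:

Code:
import numpy as np

rng = np.random.default_rng(0)
N = 32  # units per module

# Dense recurrent weights within each module, sparse weak links between them.
W_a = rng.normal(0.0, 0.3, (N, N))
W_b = rng.normal(0.0, 0.3, (N, N))
W_ab = rng.normal(0.0, 0.1, (N, N)) * (rng.random((N, N)) < 0.05)  # A -> B
W_ba = rng.normal(0.0, 0.1, (N, N)) * (rng.random((N, N)) < 0.05)  # B -> A

a, b = rng.random(N), rng.random(N)
for t in range(100):
    # Each module evolves under its own recurrent dynamics plus a weak
    # signal from the other module -- the cross-module "communication".
    a, b = np.tanh(W_a @ a + W_ba @ b), np.tanh(W_b @ b + W_ab @ a)

print(a[:4], b[:4])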
 
It is the ultimate goal of the machine learning program to develop such a flexible algorithm for learning, but I doubt it can ever be done without a complex system approach
But humans classify, cluster, optimize and utilize far more advanced tricks than any algorithm is capable of performing, and they do it all day every day over decades.
And all with a minimal portion of the genetic code, which does far more than just "execute commands": it also encodes the machinery needed to construct and initiate itself... I wonder what that adds up to in bits of DNA compared to source code and data, not that it's an apples-to-apples comparison.
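A rough back-of-envelope for the DNA side of that comparison (a sketch only: it ignores compression, non-coding DNA, gene regulation, and everything else that makes this not apples-to-apples; the codebase figure is an assumed order of magnitude):

Code:
# Human genome: ~3.2 billion base pairs at 2 bits per base (A, C, G, T).
base_pairs = 3.2e9
genome_gb = base_pairs * 2 / 8 / 1e9        # bits -> bytes -> GB
print(f"genome ~ {genome_gb:.1f} GB uncompressed")  # ~0.8 GB

# For contrast, a large software project, assuming ~50 bytes per source line.
lines_of_code = 50e6                        # order of a major OS codebase
print(f"50M-line codebase ~ {lines_of_code * 50 / 1e9:.1f} GB of source")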
 

lavinia

Science Advisor
A couple of computer technologists that I know are skeptical about AI largely because machine learning often requires huge amounts of data. That said, they do believe that most tasks currently done by humans will someday be done by computerized machines. This will lead to a crisis of employment when human labor becomes obsolete. This does not mean that machine thinking will be like human thinking. But it does mean that individual tasks will be mechanized.

More broadly, one might ask what sort of machine the human brain is. And even broader than that, what sort of models of thinking are there? A nerve cell is just an on/off switch with a threshold trigger and is easily modeled on a computer. Also, simple nervous systems - e.g. the nervous systems of some species of clams - have been completely modeled by finite state machines. This sort of consideration would suggest that the human mind is an extremely complex finite state machine. Some have suggested that the brain may also use quantum computing. Whether or not this is true, quantum computing seems to be another possible model.
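For what it's worth, the "on/off switch with a threshold trigger" view is essentially the classic McCulloch-Pitts neuron, and it really is trivial to model; a minimal sketch:

Code:
def neuron(inputs, weights, threshold):
    """McCulloch-Pitts style unit: fire (1) iff the weighted input sum
    reaches the threshold, otherwise stay silent (0)."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# A two-input unit acting as an AND gate: it fires only when both inputs fire.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", neuron([a, b], [1, 1], threshold=2))

Whether chaining billions of such units, plus plasticity, amounts to a mind is of course exactly the question in this thread.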
 

FactChecker

Science Advisor
But there's no way a learning algorithm can make any of those connections because those deductions are mostly intuition and logic, which are not statistical. Not to mention, how would ML look at confounders?
I think that you are seriously underestimating the variety and seriousness of the research being done. There are already symbolic logic manipulators and theorem provers in practical applications and in general use. There are other research efforts that manipulate relationships, looking for fundamental theorems. It is a misconception to think that the state of the art of machine learning is limited to data analysis.
 
From what I understand, machine learning is incredibly good at making predictions from data in a very automated/algorithmic way.
This is a meaningless generalization. If the data is undersampled, inaccurately labeled (which is most of the time), or complex (e.g. one sample is hundreds of gigabytes in size), or if the problem requires high accuracy, machine learning is an atrocious approach. The majority of problems have these downsides.

Also, the effectiveness depends not only on these generalizations, but also on the method and specific problem. Clustering is hugely inaccurate and heavily dependent upon human intervention ("What is a cluster? What is similarity?"), making it very vulnerable to the "high accuracy" weakness, since constructing the similarity model either requires a vast amount of data you don't have or very good human intuition. Classification can be much easier since it is not usually constrained in the same way.

Finally, machine learning does not make predictions from data in an automated/algorithmic way; it builds models, which require some form of assumptions, in an algorithmic/automated way, and these models make the predictions. This is more than a trite observation. Consider clustering. For typical methods (e.g. k-means), you are deciding what function determines similarity (in this case, d-dimensional Euclidean distance). The assumption that d-dimensional Euclidean distance captures similarity is usually nonsense, and in my experience is usually not checked in any meaningful way.
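To make that concrete, here is a minimal k-means sketch (plain numpy, synthetic data) in which the similarity function is an explicit, swappable argument. Strictly, the mean-update step is only optimal for squared Euclidean distance; with any other metric the loop becomes a heuristic, which is rather the point: the similarity choice is a modeling assumption, not something the algorithm discovers.

Code:
import numpy as np

def kmeans(X, k, metric, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its "most similar" center; 'metric' IS the
        # modeling assumption under discussion. (No empty-cluster handling,
        # for brevity.)
        d = np.array([[metric(x, c) for c in centers] for x in X])
        labels = d.argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

euclidean = lambda x, c: np.linalg.norm(x - c)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(4, 0.5, (50, 2))])
labels, _ = kmeans(X, k=2, metric=euclidean)
print(np.bincount(labels))  # two well-separated blobs -> roughly a 50/50 split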

The real advantage of ML is not prediction accuracy; it is automation. Thus, if I run a company or an experiment that generates large amounts of data, ML is a useful way to write programs that build models which use this data, sometimes updating automatically. However, you still have to figure out how to model the data. In principle you can build every part of the model directly from data; for instance, a neural network can be fitted to compute similarity instead of a Euclidean norm, and a different clustering algorithm can be used. The difficulty is that you will essentially never have the data or computational resources to do this; it's like trying to simulate an integrated circuit using density functional theory to model all of the electronics from the atoms up (i.e. stupid). You have to truncate and make modeling assumptions somewhere; they appear even in how you label the data and train the NN.
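Continuing the sketch above, swapping in a different notion of similarity is a one-line change; here a hypothetical fixed random projection stands in for the trained network mentioned in the post, just to show where a learned similarity would plug in:

Code:
# Hypothetical stand-in for a learned embedding -- a trained network would
# replace this fixed random projection.
P = np.random.default_rng(2).normal(size=(2, 8))
embedded = lambda x, c: np.linalg.norm(x @ P - c @ P)

labels_e, _ = kmeans(X, k=2, metric=embedded)
print(np.bincount(labels_e))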
 
So basically, a machine learning algorithm would need human level intelligence and intuition to be able to do proper causal analysis?
Machine learning cannot "pull a rabbit out of a hat". There is no magic. A neural net has to be fine-tuned to be able to make any reasonable associations from large training data sets. I don't know for sure, but I don't think there are any large data sets of causal analysis for it to "learn" from, and even if there were, these algorithms aren't like humans, who can take a data set and expand upon it to make sense of unfamiliar connections.
 

jim mcnamara

Mentor
This thread is diverging from AI and going into too much personal opinion. I am moving it to General Discussion. Why? Because there are some good posts here mixed with less useful opinion. We do not need to throttle people for lack of scientific poise if the thread lives in GD.

Thread moved.
 

kith

Science Advisor
A couple of computer technologists that I know are skeptical about AI largely because machine learning often requires huge amounts of data.
There's a new version of AlphaGo which seems to use minimal input data. To quote from the article on the deepmind blog:
Deepmind said:
Previous versions of AlphaGo initially trained on thousands of human amateur and professional games to learn how to play Go. AlphaGo Zero skips this step and learns to play simply by playing games against itself, starting from completely random play.
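Schematically, that self-play loop looks something like the following (a structural sketch only, nowhere near DeepMind's actual system; every function here is a placeholder):

Code:
import random

def play_one_game(policy):
    """Placeholder self-play: a real system would run MCTS-guided moves and
    return (state, search policy, outcome) tuples for every position."""
    states = [random.random() for _ in range(10)]
    outcome = random.choice([-1, 1])      # win/loss from one player's view
    return [(s, outcome) for s in states]

def improve(policy, replay_buffer):
    """Placeholder training step: a real system fits the policy/value network
    to the examples gathered from self-play."""
    return policy

policy = None        # starts from completely random play -- no human games
replay_buffer = []
for iteration in range(100):
    replay_buffer += play_one_game(policy)   # the agent is its own opponent
    policy = improve(policy, replay_buffer)  # learn only from its own games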
 
So far, I am not convinced that machines are particularly good at learning. For example, an average human teenager can learn to drive a car with about 60 hours of experience. The machines that are learning the same task have millions of hours and are far from average. Similarly, even a below average human toddler can learn to speak any language with far less data than is available to computers attempting the same task.
To be honest, I think there is some false equivalence here, not that I can necessarily produce a more equitable comparison myself. The very nature of "learning" is, in my opinion, extremely dependent not only on the capability and "suitability" of the pupil (to take an extreme example, a person without legs would struggle to learn to "walk"), but also on the particulars of the subject and the manner of tuition.

A human teenager has an immense advantage because they already know that a cyclist up ahead is a cyclist who might move onto the road, as opposed to a lamppost, which is less likely to do so; they have instinctive emotional reactions that can trigger responses like slamming on the brakes. Of course, autonomous vehicles have other advantages that humans do not have, but again, there is no direct equivalence here, and I don't believe such comparisons lead to a 'fair' appraisal of human and machine learning.

By that kind of logic, one might argue that deer are much faster learners than humans just because a deer can walk minutes after birth.

There is a wealth of additional context, available information, and 'technique' that humans learn and can apply whilst learning to drive, which isn't necessarily true of machine/AI autonomous driving tech.
By the time a teenager starts learning to drive, they have already seen and understood traffic lights; they know not to stop in the middle of the motorway; they know that there are some roads they might not be able to drive down; and they know the implications of mechanics, such that driving faster means less control, more risk, and longer stopping distances. All of these factors, intuitions, and preconceptions are built up over the course of the teenager's life, and are therefore already present before the 60 hours of learning begin. The laws that make up highway codes have been developed over decades, and other knowledge about the world has been passed down through generations from many varied sources.

Before a machine can learn to drive a car, it needs to learn the pattern recognition, and the data needs to be amassed, collated, and prepared in a format that can be utilised effectively. This also requires that the data-retrieval aspects of the algorithms are developed AND EFFICIENT ENOUGH to work in the real-time scenario of driving a car.
Yes, computers process gajillions of operations a second, but the number of data points to be processed (in visual processing alone) is also incredibly high.
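For a sense of scale on the visual side (rough numbers, assuming a hypothetical six-camera 1080p rig; real systems vary widely):

Code:
# Raw pixel throughput for a hypothetical 6-camera rig at 1080p / 30 fps.
cams, width, height, fps, bytes_per_px = 6, 1920, 1080, 30, 3
rate = cams * width * height * fps * bytes_per_px
print(f"~{rate / 1e9:.1f} GB/s of raw pixels")  # ~1.1 GB/s, before any processing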

Humans have had BILLIONS of years of evolution to refine their image processing.


I've not used the best examples, and I appreciate that there are some counterpoints (e.g. "it doesn't add much to a 60-hour learning time to learn that red = stop"), but I hope the point is clear at least.
 
Humans have had BILLIONS of years of evolution to refine their image processing.
I guess if you assume bacteria had image-processing abilities, then just maybe those genes found their way into our DNA, but that's a stretch...
but I hope the point is clear at least.
If there were sub-routines specifically devoted to threat detection, and they were programmed effectively, that would at least make the rest of the system straightforward.
 

Khashishi

Science Advisor
So far, I am not convinced that machines are particularly good at learning. For example, an average human teenager can learn to drive a car with about 60 hours of experience. The machines that are learning the same task have millions of hours and are far from average. Similarly, even a below average human toddler can learn to speak any language with far less data than is available to computers attempting the same task.
The human can transfer other knowledge accumulated during their lifetime to the task of driving. For example, what a pedestrian looks like, what color the sky is, how to walk. Now try to teach a newborn to drive in 60 hours.
 
Now try to teach a newborn to drive in 60 hours.
All I can think of is The Simpsons intro where Marge is driving and Maggie is in her car seat mimicking her every move. :-p
 
The human can transfer other knowledge accumulated during their lifetime to the task of driving.
That is a big part of what makes humans so good at learning, and computers not.

By that kind of logic, one might argue that deer are much faster learners than humans just because a deer can walk minutes after birth.
Interesting point! I wonder if deer actually learn to walk or if it is already hardwired in? I don’t know the answer to that, but in general it seems to me that humans do a lot more learning than other species.
 

Khashishi

Science Advisor
True. Less reliance on instincts is probably why humans are so good at learning new things. We have some remarkable neuroplasticity. If we hook up robot appendages to a baby, with proper connections, it could probably learn to walk on robot legs.
 

The title is a bit clickbaity, but the research mentioned seems truly remarkable, or at least very interesting; especially the democratization of ML that automatic ML could bring - from researchers to basically anyone, for any task - seems like a major revolution still waiting to happen.

@Danko Nikolic, might this be a precursor to your AI kindergarten? I did not read their paper, so I'm not sure whether they refer to your papers, but their automatic-ML reinforcement learning reminds me an awful lot of your practopoietic learning-to-learn explanation. If they haven't referenced you, they should, or maybe you should give a Google Tech Talk.
 
Here we see the current limits on machine learning in action: Sofia vs Penrose!
 

BWV

I think you're missing the point of how machine learning is being increasingly done today. Many (perhaps most) machine learning tools today are not algorithmic in nature. They use neural networks configured in ways similar to the human brain, and then train these networks with learning sets, just like a human is trained to recognize patterns. Even the (human) designer of the neural network doesn't know how the machine will respond to a given situation. Given this, I don't see why these artificial neural networks cannot match or eventually exceed human capability. Indeed, I think Google's facial recognition software is already exceeding human capability. Granted this is in a controlled environment, but given time and increasing complexity of the networks (and increasing input from the environment), I think you will see these machines able to do anything a human mind can do.
Yes, but these techniques need a clear set of rules and a definition of 'winning' to optimize against - as in Go, chess, or poker (which was not a deep-learning algorithm, BTW). The 'rules' and the definition of 'winning' for tasks like driving are much more nebulous and complex, let alone the optimization function for a general AI.
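To illustrate the asymmetry (a toy sketch; the driving terms and weights are invented to show the problem, not taken from any real system):

Code:
# Crisp objective (Go/chess): the environment itself declares the winner.
def go_reward(result):                 # result in {"win", "loss", "draw"}
    return {"win": 1.0, "loss": -1.0, "draw": 0.0}[result]

# Nebulous objective (driving): a human must invent and weight every term.
def driving_reward(progress, jerk, lane_error, near_misses):
    # Hypothetical weights -- each one is a contestable judgment call.
    return 1.0 * progress - 0.1 * jerk - 0.5 * lane_error - 10.0 * near_misses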
 
From what I understand, machine learning is incredibly good at making predictions from data in a very automated/algorithmic way.

But for any inference that is going to deal with ideas of causality, it's primarily a subject-matter concern, which relies mostly on judgment calls and intuition.

So basically, a machine learning algorithm would need human level intelligence and intuition to be able to do proper causal analysis?

Here's an example where there might be issues.

Say an ML algorithm finds that low socioeconomic status is associated with diabetes with a significant p-value. We clearly know that diabetes is a biological phenomenon and that any possible (this is a big if) causal connection between a non-biological variable such as low SES and diabetes must logically have intermediate steps between the two variables within the causal chain. It is these unknown intermediate steps that should probably be investigated in follow-up studies. We logically know (or intuit from prior knowledge plus domain knowledge) that low SES could lead to higher stress or an unhealthy diet, which are biological. So a significant p-value for SES indicates that maybe we should collect data on those missing variables, and then redo the analysis with those in the model.

But there's no way a learning algorithm can make any of those connections because those deductions are mostly intuition and logic, which are not statistical. Not to mention, how would ML look at confounders?
Let me offer some opinions as someone who used AI early in the game. The human brain is perhaps the closest thing to infinite we've ever found. AI runs on a computer, which isn't even as brilliant as a lizard. There are arguments among authorities about whether the brain works using binary logic or analog logic, with me leaning towards analog. We can speak of quantum computers, but these are not really analog - they use a logic of 0, maybe, and 1. AI is really a method of bypassing a lot of hardware by using training algorithms. But this has the same limitations as a lot of hardware - it can only do what you train it to do. It can only use methodology you show it. It cannot actually invent anything of its own unless you train it in the method to do so. What we're speaking of here is teaching a computer how to be original. Since we cannot even tell ourselves how to be original beyond stupid things like "think outside the box", you're not going to achieve that.

Self-driving cars will work fine if all of the human drivers around them obey the rules and the road designers do as well. There is NO CHANCE of that. I ride a bicycle for sport, and believe me, I cannot ride one mile without observing people breaking driving laws in such a dangerous manner that it surprises me we don't have 100 times the accident rate we do. The fact that we don't is because other drivers are ALWAYS expecting poor driving, and they are thinking further ahead than a self-driving car could.

Once, while I was waiting at a stop light, it changed to green and NO ONE MOVED. I wondered why, and then a car exiting the bridge came through the red light a full 5 seconds after the light had changed, doing 60+ mph and accelerating. Because of the speed and the amount of time after the light changed, there isn't an automated driving system in the world that could have predicted that. The speed of that pickup put it 150 yards out and beyond the practical range of the detectors on a self-driving car. Put in better hardware? There are price limitations, after all.

But there is even more - you are insured. If you get into an accident, your insurance pays the bills if required. But with self-driving cars, the system manufacturer is liable. No insurance company in its right mind would insure such a company. This is like hanging out a sign saying, "Millions of dollars, free for the taking."

So the single largest source of R&D money for AI research is essentially removed from the market, and each company must assume the liabilities via its shareholders. I'll leave you to decide how successful that is going to be. Tesla has already ceased to call it an "autopilot" and now calls it a navigation device, and you are required to have two hands on the wheel the entire time it is engaged. And I'll bet there are now detectors built in to remove all liability from Tesla if you do take a hand off before a wreck.

Having a smartphone do simple tasks under AI might give those of the Star Wars generation visions of C-3PO, but it isn't that, and it won't improve much.

Google's use of AI is little more than delivering news to you that they think you want - anti-Trump stuff only, or pro-Trump stuff for the other side. And Google guards this data jealously, because if it becomes too widely understood that users are essentially being manipulated, there is no telling the price they will pay for that. Go to YouTube and play a Glenn Miller piece, and almost all of the remaining suggestions are from the 30's to 50's. This isn't smart. This is in fact rather stupid. Playing a Huey Lewis and the News piece doesn't mean that you want to listen to everything else from the 80's.

Can AI and Deep Learning be improved? What couldn't be? But will that improvement overcome the real limitations of machine intelligence? That is highly doubtful.
 
I'm fascinated by the thought of analog computing returning. I have no business even entering this conversation with my level of knowledge; that said, it seems the potential of analog computing opens up avenues that are far more complex than straight binary. Arguments against analog that I've heard are that at the very base of physics lies the state of is, or is not. Binary. Yet it seems to me that synthesis and learning are more of an analog process. "Probably" is best described as a range or spectrum - analog. "Definitely" is singular and discrete - binary.
I wonder if machine learning and AI will advance through further investigation of analog computing, possibly crossed with binary - much as the realization that nonlinear equations weren't junk, but were better described as three-dimensional entities. Is it possible that simultaneous data running on multiple channels in an analog computer might experience resonance that could be a source of insight? Clock speeds and amplitudes could result in intersections of data that might prove profound. How one would configure a system like this is beyond me, but I have to wonder whether the answer to ultimate machine learning can be found solely in a binary state. I don't believe there is a hard limit to machine learning. Humans have barely scratched the surface of the AI field, yet from my age perspective the advances are staggering. So, carry on. All is as it should be.
 
Digital logic is so fast because it only has two states. Analog has a theoretically infinite number of states, though in practice the achievable accuracy comes down to settling time, and that is a great deal slower. Whether the extra information carried per analog "bit" is worth it hasn't been the case so far, but I will have to think about that. I know a couple of REAL analog engineers who are more than worth their salt. National awards and all.
 
OK, I went to the people who know the real stuff, and it seems the settling time needed to reach an accuracy of one part in one thousand would be about 100 times the circuit's fastest response time. This sort of kills the idea, since it would take about a millisecond to reach that accuracy in a common op-amp. You can push the envelope with high-speed op-amp designs, etc., but digital circuitry could do the same job in a hundredth of that time.
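As a sanity check with the simplest possible model (single-pole exponential settling; real op-amps add slewing and extra poles, which is presumably where the larger factor quoted above comes from, so treat this as an optimistic lower bound):

Code:
import math

# First-order settling: error decays as exp(-t/tau), so reaching one part
# in a thousand takes t = tau * ln(1000) ~ 6.9 time constants per pole.
n_tau = math.log(1000)
print(f"{n_tau:.1f} time constants for 0.1% settling")

# For a hypothetical 1 MHz single-pole op-amp, tau = 1 / (2*pi*f):
tau = 1 / (2 * math.pi * 1e6)
print(f"settling time ~ {n_tau * tau * 1e6:.2f} us")  # ~1.1 microseconds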

Shrinking the circuitry could speed it up or slow it down depending on many things - the lower the power, the larger the effects of stray capacitance, conductor resistance, etc.

I'm far more of a digital designer, and after thinking about quantum computing I'll have to learn more about it. But I think I have an idea for making a simple cell.
 
Say an ML algorithm finds that low socioeconomic status is associated with diabetes with a significant p-value. We clearly know that diabetes is a biological phenomenon and that any possible (this is a big if) causal connection between a non-biological variable such as low SES and diabetes must logically have intermediate steps between the two variables within the causal chain. It is these unknown intermediate steps that should probably be investigated in follow-up studies. We logically know (or intuit from prior knowledge plus domain knowledge) that low SES could lead to higher stress or an unhealthy diet, which are biological. So a significant p-value for SES indicates that maybe we should collect data on those missing variables, and then redo the analysis with those in the model.

But there's no way a learning algorithm can make any of those connections because those deductions are mostly intuition and logic, which are not statistical. Not to mention, how would ML look at confounders?
An ML algorithm with advanced language-processing capabilities could learn about diabetes from the existing literature and find the confounding variables that would need to be accounted for. Artificial intelligence can theoretically be superior to the human brain in everything, and I'm pretty sure that it's just a matter of time until that is achieved.
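For what "accounting for" such a variable looks like mechanically, here is a minimal sketch on synthetic data (plain numpy; the SES -> stress -> diabetes numbers are invented purely for illustration, with stress as the intermediate step the quoted post describes; adjusting for a true confounder is mechanically identical):

Code:
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
ses = rng.normal(size=n)                          # socioeconomic status
stress = -0.8 * ses + rng.normal(size=n)          # low SES -> more stress
risk = 0.5 * stress + rng.normal(size=n)          # risk driven by stress only

def slopes(y, *cols):
    """OLS slopes (intercept dropped)."""
    X = np.column_stack([np.ones(n), *cols])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

print("SES alone:   ", slopes(risk, ses))         # ~ -0.4: SES looks causal
print("SES + stress:", slopes(risk, ses, stress)) # SES ~ 0, stress ~ 0.5

Once the intermediate variable is measured and added to the model, the direct SES coefficient collapses toward zero, which is exactly the "collect data on the missing variables and redo the analysis" step from the quoted post.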
 
Artificial intelligence can theoretically be superior to the human brain in everything,
Please cite the professional scientific reference that supports this claim. It seems improbable to me.
 
