Has anyone ever programmed a computer system to evolve theories?

AI Thread Summary
The discussion revolves around the potential for computers to generate and evolve scientific theories based on experimental data. The concept involves feeding data into a system that can create mathematical structures and theories, which are then tested against additional data to assess their validity. This iterative process would involve slight modifications to theories, akin to evolutionary algorithms, where successful theories are refined over time. Key points include the challenge of defining what constitutes a 'theory' for a computer, as well as the need for the system to recognize relationships and patterns within data. The discussion highlights the possibility of starting with simple observations and gradually increasing complexity, allowing the computer to explore various theoretical constructs. There are considerations about the limitations of current technology, with suggestions that future advancements, possibly involving quantum computing or more sophisticated neural networks, could enable computers to develop imaginative solutions and new mathematical frameworks. The conversation also touches on the philosophical implications of machine intelligence and the evolving definitions of intelligence as computers become more capable of tasks traditionally thought to require human-like reasoning.
Meatbot
You would feed it experimental data and it would generate theories or mathematical structures which fit the data. The theories would then be run against related experimental data to see if the theory also predicted them as well. Slight mutations, mating successful ones ---> evolved theory. Impossible or just hard?
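For concreteness, here is a minimal sketch of the loop being proposed, using only the Python standard library: candidate "theories" are small arithmetic expression trees, fitness is prediction error against observations, and the best-fitting ones are slightly mutated and mated each generation before being checked against held-out data. The operators, population parameters, and the toy y = 3x + 1 data are illustrative assumptions, not a claim about how a real system would have to work.

Code:
import random, math

# The building blocks a "theory" may be assembled from.
OPS = {'+': lambda a, b: a + b,
       '-': lambda a, b: a - b,
       '*': lambda a, b: a * b}

def random_expr(depth=2):
    # Grow a random expression tree: ('op', left, right), the variable 'x', or a constant.
    if depth == 0 or random.random() < 0.3:
        return 'x' if random.random() < 0.5 else random.uniform(-2, 2)
    op = random.choice(list(OPS))
    return (op, random_expr(depth - 1), random_expr(depth - 1))

def evaluate(expr, x):
    if expr == 'x':
        return x
    if isinstance(expr, tuple):
        op, left, right = expr
        return OPS[op](evaluate(left, x), evaluate(right, x))
    return expr  # a numeric constant

def fitness(expr, data):
    # Lower is better: squared prediction error of the "theory" on the observations.
    try:
        return sum((evaluate(expr, x) - y) ** 2 for x, y in data)
    except OverflowError:
        return math.inf

def mutate(expr):
    # Slight mutation: replace a random subtree with a freshly grown one.
    if not isinstance(expr, tuple) or random.random() < 0.3:
        return random_expr(depth=2)
    op, left, right = expr
    return (op, mutate(left), right) if random.random() < 0.5 else (op, left, mutate(right))

def mate(a, b):
    # "Mate" two theories by grafting b onto a random spot inside a.
    if not isinstance(a, tuple) or random.random() < 0.3:
        return b
    op, left, right = a
    return (op, mate(left, b), right) if random.random() < 0.5 else (op, left, mate(right, b))

def evolve(train, held_out, generations=100, pop_size=60):
    pop = [random_expr() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda e: fitness(e, train))
        parents = pop[:pop_size // 5]                      # keep the best-fitting theories
        n_mut = pop_size // 2
        n_mate = pop_size - len(parents) - n_mut
        pop = (parents
               + [mutate(random.choice(parents)) for _ in range(n_mut)]
               + [mate(random.choice(parents), random.choice(parents)) for _ in range(n_mate)])
    best = min(pop, key=lambda e: fitness(e, train))
    return best, fitness(best, held_out)  # then run it against *related* data, as proposed

# Toy run: the observations secretly follow y = 3x + 1.
train = [(x, 3 * x + 1) for x in range(-5, 6)]
held_out = [(x, 3 * x + 1) for x in range(6, 10)]
print(evolve(train, held_out))

The hard part in practice is the representation: systems such as Eureqa (linked below) search a much richer space of expressions than this toy, but the generate/test/mutate skeleton is the same.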
 
Define 'theory' to a computer.
 
DaveC426913 said:
Define 'theory' to a computer.

Maybe you don't need to. It would be programmed with the current theories and the current data and then start riffing off the current theories, modifying them in millions of ways and testing against data. Adding/removing terms and relationships. Mixing structures. Could begin by throwing out millions of what-ifs and "crackpot" theories until one sticks, then evolving it. Just brute force.

1) What if the universe is 1D? Fail, doesn't fit data.
2) What if it's 2D? Fits more data, since you can have things in more than one place, which we see, but fails against other data.
3) What if it's 3D? Fits even more data.
4) What if it's 3D and stuff doesn't move? Fail, data shows movement.
5) What if it's 3D and stuff moves? Fits some data but doesn't address all of it.

Start with really simple things and keep getting more and more complex...

It would consist of lots of simple building blocks or observations.
1) Something exists. (things > 0)
2) There are many things (things > 1)
3) A thing can be different from another thing. (Things(a) > 1 and Things(b) > 1)
4) Some things are blue
5) blue things reflect light with wavelength x...
etc...

And then various structures would have properties:
- sphere has more than one point
- sphere has volume
- sphere is 3d
etc...

Theory: Universe consists of two spheres.
Check against data: Data shows 3 spheres, therefore Spheres>2
Result: Fail, modify and try again.

Not a computer science or physics guy, but it seems like it could be done, even though it'd be hard.
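A rough sketch of the "throw out what-ifs and check against data" idea above, assuming a theory can be written as a list of claims (predicates) over a table of observations. The observation fields and the candidate theories are invented purely for illustration.

Code:
# Toy observations, in the spirit of "data shows 3 spheres" and "data shows movement".
observations = {"sphere_count": 3, "dimensions": 3, "things_move": True}

# Each candidate theory is a name plus a list of claims; it survives only if every claim holds.
candidate_theories = [
    ("the universe is 1D",
     [lambda obs: obs["dimensions"] == 1]),
    ("the universe is 3D and nothing moves",
     [lambda obs: obs["dimensions"] == 3,
      lambda obs: obs["things_move"] is False]),
    ("the universe is 3D and things move",
     [lambda obs: obs["dimensions"] == 3,
      lambda obs: obs["things_move"] is True]),
    ("the universe consists of two spheres",
     [lambda obs: obs["sphere_count"] == 2]),
    ("the universe has more than two spheres",
     [lambda obs: obs["sphere_count"] > 2]),
]

def check(theory, obs):
    name, claims = theory
    ok = all(claim(obs) for claim in claims)
    print(f"{name}: {'fits the data' if ok else 'fail, modify and try again'}")
    return ok

survivors = [t for t in candidate_theories if check(t, observations)]
# The survivors would then be mutated (claims added, removed, tightened)
# and re-checked against further data, exactly as described above.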
 
Meatbot said:
You would feed it experimental data and it would generate theories or mathematical structures which fit the data. The theories would then be run against related experimental data to see if the theory also predicted them as well. Slight mutations, mating successful ones ---> evolved theory. Impossible or just hard?

http://www.sciencemag.org/cgi/content/abstract/324/5923/85
http://singularityhub.com/2009/12/17/eureqa-software-to-replace-scientists/

If nothing else, it's interesting to think about.
 
Meatbot said:
Maybe you don't need to. It would be programmed with the current theories and the current data and then start riffing off the current theories, modifying them in millions of ways and testing against data. Adding/removing terms and relationships. Mixing structures. Could begin by throwing out millions of what-ifs and "crackpot" theories until one sticks, then evolving it. Just brute force.

1) What if the universe is 1D? Fail, doesn't fit data.
2) what if it's 2D? fits more data since you can have things in more than one place, which we see, but fails against other data.
3) what if it's a 3D? Fits even more data
4) What if 3D and stuff doesn't move? Fail, data shows movement
5) What if 3D and stuff moves? fits some data but doesn't address all of it.

Start with really simple thiings and keep getting more and more complex...

It would consist of lots of simple building blocks or observations.
1) Something exists. (things > 0)
2) There are many things (things > 1)
3) A thing can be different from another thing. (Things(a) > 1 and Things(b) > 1)
4) Some things are blue
5) blue things reflect light with wavelength x...
etc...

And then various structures would have properties:
- sphere has more than one point
- sphere has volume
- sphere is 3d
etc...

Theory: Universe consists of two spheres.
Check against data: Data shows 3 spheres, therefore Spheres>2
Result: Fail, modify and try again.

Not a computer science or physics guy, but it seems like it could be done, even though it'd be hard.

Great start, and yes, actually, I did, back in the 80s, only it was about the extent of the above, times 10,000.

We basically sat around modeling ourselves with questions and answers for a few months.

I think we did well, but not terrific.
 
Meatbot said:
It would be programmed with the current theories...

Again, define 'theory' to a computer.

i.e. how does a computer program know what a 'theory' is?
 
DaveC426913 said:
Again, define 'theory' to a computer.

i.e. how does a computer program know what a 'theory' is?

Why is that necessary?

Evolution doesn't work from a definition. It's a matter of how a certain set of stimuli (inputs) is classified. I.e., definitions are inferred from observations. Generalizations are made across the observations. Then, for example, an exception to the generalizations is found, a fission occurs, and a new branch of classifications is developed. As more observations are made, the generalizations are optimized.

Then, for our own sake, we might define a theory as a generalization that meets a minimum threshold of 0.4 on the "generalness rating scale", and that can be tested by testing it on future observations that haven't been directly observed but are similar to past observations.
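As a toy illustration of that definition: a "generalization" induced from past observations is scored by the fraction of unseen observations it survives, and it is promoted to "theory" only above a threshold. The 0.4 cutoff, the blue-things rule, and the observations below are all made-up assumptions.

Code:
past = [{"color": "blue", "reflects_nm": 470},
        {"color": "blue", "reflects_nm": 465},
        {"color": "red",  "reflects_nm": 650}]

future = [{"color": "blue", "reflects_nm": 472},
          {"color": "blue", "reflects_nm": 468},
          {"color": "blue", "reflects_nm": 455},   # an exception: would force a new branch
          {"color": "red",  "reflects_nm": 640}]

# A generalization inferred from the past observations (not from a definition):
# "blue things reflect light between 460 and 475 nm".
def blue_rule(obs):
    return obs["color"] != "blue" or 460 <= obs["reflects_nm"] <= 475

def generalness(rule, observations):
    # Fraction of not-yet-seen observations the rule survives.
    return sum(rule(o) for o in observations) / len(observations)

THEORY_THRESHOLD = 0.4
score = generalness(blue_rule, future)
print(f"generalness = {score:.2f}",
      "-> counts as a theory" if score >= THEORY_THRESHOLD else "-> stays a mere generalization")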
 
This looks like the fantasies people had a few decades back about neural networks...
Just feed in lots of data and the "system" will find out.
It turned out to be much harder than it sounded.
 
vanesch said:
This looks like the fantasies people had a few decades back about neural networks...
Just feed in lots of data and the "system" will find out.
It turned out to be much harder than it sounded.

Well yeah, nobody said it was easy.

Isn't quantum computing essentially moving toward this? I mean, they're basing their qubits on the state vector of a fundamental physical system, not just abstractly, but in the hardware itself, correct?
 
  • #10
This could require a completely evolving program that can make up new math as well.
And based on that, recognize relations and find patterns in patterns.

But then how do you let a computer see how things relate to each other without predefining what it should look for? Recognizing those relations should itself be something it evolves in order to find the new theories.

Although I do think predefining with our current knowledge could already let it find interesting patterns.

It all reminds me of this:
http://singularityhub.com/2010/05/12/stephen-wolfram-is-computing-a-theory-of-everything-video/
 
  • #11
Pythagorean said:
Why is that necessary?

Evolution doesn't work from a definition. It's a matter of how a certain set of stimuli (inputs) is classified. I.e., definitions are inferred from observations. Generalizations are made across the observations. Then, for example, an exception to the generalizations is found, a fission occurs, and a new branch of classifications is developed. As more observations are made, the generalizations are optimized.

Then, for our own sake, we might define a theory as a generalization that meets a minimum threshold of 0.4 on the "generalness rating scale", and that can be tested by testing it on future observations that haven't been directly observed but are similar to past observations.

Precisely. How do you quantify all that for a computer to understand, let alone manipulate?

I can see finding equations to describe things (maybe it could deduce Kepler's Laws) but theories are semantic.
 
  • #12
DaveC426913 said:
Precisely. How do you quantify all that for a computer to understand, let alone manipulate?

I can see finding equations to describe things (maybe it could deduce Kepler's Laws) but theories are semantic.

There has to be a way of doing this since our brains can do it. How did you figure it all out? How do you know there is more than one thing in the world? Perhaps as a baby you took in visual data, and since the input described multiple areas of color, your brain generalized and created a structure that somehow encodes the idea that there is more than one thing in the world. There must be certain arrangements of matter which correspond to the idea that more than one thing exists. Let it learn the semantics the same way we do.

Or figure out how we represent semantics in our brains and then duplicate it - actually, this might be faster. I don't think this is at all easy - just saying that teaching a computer semantics shouldn't be impossible. Exactly how is above my pay grade.

Check this out:
http://en.wikipedia.org/wiki/Computational_semantics
 
  • #13
DaveC426913 said:
Precisely. How do you quantify all that for a computer to understand, let alone manipulate?

I can see finding equations to describe things (maybe it could deduce Kepler's Laws) but theories are semantic.

Ah, yes, I guess we have a small difference in the definition of theory. The semantic part of the theory would have to be developed by humans (well, today anyway). The program would find correlations in the data and derive generalized equations (like F = ma and F21 = -F12) that are episodic (for each data set). We could write a program that is semantic in its algorithm (it takes many "episodes" and classifies them), which is essentially what semantics is. Your numerous episodic encounters with apples have led you to the semantics of what an apple is.
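A tiny sketch of "find correlations in the data and derive generalized equations", assuming a linear hypothesis template F = k*m*a and made-up measurements; the least-squares fit effectively rediscovers the proportionality in F = ma.

Code:
# (mass in kg, acceleration in m/s^2, measured force in N, with a little noise)
episodes = [
    (1.0, 2.0, 2.05), (2.0, 1.5, 2.96), (0.5, 4.0, 2.02),
    (3.0, 0.7, 2.12), (1.5, 3.0, 4.47),
]

# Least-squares estimate of k in the template F = k * m * a.
num = sum(m * a * F for m, a, F in episodes)
den = sum((m * a) ** 2 for m, a, F in episodes)
k = num / den
print(f"F = {k:.3f} * m * a")   # expect k close to 1, i.e. the program "rediscovers" F = ma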

BTW, never argue with a computer over semantics. Pigs like getting dirty. Or something.
 
  • #14
Meatbot said:
There has to be a way of doing this since our brains can do it.

Sure, 3 billion years of cumulative selection. I don't think I can afford that much compute time.
 
  • #15
Meatbot said:
There has to be a way of doing this since our brains can do it. How did you figure it all out?

Oh. I thought you meant like "within this century", not far flung future.
 
  • #16
DaveC426913 said:
Precisely. How do you quantify all that for a computer to understand, let alone manipulate?
That is the essence of automated pattern recognition and machine learning. This isn't new stuff, Dave. AutoClass (http://ti.arc.nasa.gov/tech/rse/synthesis-projects-applications/autoclass/) discovered a new class of infrared stars back in the early 1990s.

I can see finding equations to describe things (maybe it could deduce Kepler's Laws) but theories are semantic.
Actually, doing just that has long been the goal of Cyc, along with other less notorious AI projects. It most certainly is the goal of the Semantic Web, aka Web 3.0.
 
  • #17
D H said:
That is the essence of automated pattern recognition and machine learning. This isn't new stuff, Dave. AutoClass (http://ti.arc.nasa.gov/tech/rse/synthesis-projects-applications/autoclass/) discovered a new class of infrared stars back in the early 1990s. Work on automated discovery has continued since the heady AI days of the 1970s-1990s. Closer to the topic at hand, Eureqa has (re)discovered several laws of physics without human intervention.

Yes, that's what I said. Computers could discover new equations (or new classes of stars), I'm just dubious about new theories.
 
  • #18
Meatbot said:
You would feed it experimental data and it would generate theories or mathematical structures which fit the data. The theories would then be run against related experimental data to see if the theory also predicted them as well. Slight mutations, mating successful ones ---> evolved theory. Impossible or just hard?
Impossible. Computers are just fast, not smart. Very fast, very stupid. No induction, only deduction.
 
  • #19
jumpjack said:
Impossible. Computers are just fast, not smart. Very fast, very stupid. No induction, only deduction.

The definition of smart/intelligent has evolved during the rise of computers to mean exactly that which a computer is not, and if this continues, computers will never be intelligent. This does not mean that they cannot solve any particular class of problems better than humans; it just means that in a hundred years, when a computer is capable not only of independently rediscovering all the theories created by Einstein but also of writing several books on the philosophical implications of those theories, we will have redefined "intelligent" to somehow still exclude this behavior, because the thought of an intelligent machine is simply not something we are willing to accept.

That being said, of course a computer can be programmed to prove and test theorems just as a human can. The four color theorem was proven by computer, for example.
 
  • #20
jVincent said:
...because the thought of an intelligent machine is simply not something we are willing to accept.

If it does happen, it will not be because we are not willing to accept the idea of an intelligent machine; it will be because we realize that intelligence is not actually required to do some of the things we thought it was.

Computers today *are* dumb. If they can solve things, then those things do not require intelligence to solve.
 
  • #21
We have computation that is decent at curve fitting and statistical analysis. Mathematical modeling of data.

However, something like what you describe is, I think, quite possible, just not with current computers. It would be starting to get close to genuine AI, which I also think is quite possible, just not with current computers.
 
  • #22
DaveC426913 said:
If it does happen, it will not be because we are not willing to accept the idea of an intelligent machine; it will be because we realize that intelligence is not actually required to do some of the things we thought it was.

Computers today *are* dumb. If they can solve things, then those things do not require intelligence to solve.

You are proving my point exactly: you have offered no objective definition of intelligence, but seem completely confident that whatever definition is the "correct" one, it will exclude computers because you consider them to be dumb. Therefore you are fully accepting that we revise our definition of intelligence each and every time a computer carries out a task previously considered intelligent, and you will continue to revise the definition even to the point where computers match every capability of humans, simply because you refuse to consider the possibility of an intelligent computer.


By what argument are computers dumb? Reductionism works equally well for computers as it does for humans and as it does for animals. Trying to manipulate your definitions of dumb and intelligent with the objective of forcing these subjects into the categories that you have already decided they should be in simply leads to the definitions being meaningless.
 
  • #23
While this may be true:
jVincent said:
... seem completely confident that whatever definition is the "correct" one, it will exclude computers because you consider them to be dumb.

It does not follow that this is true:
jVincent said:
Therefore you are fully accepting that we revise our definition of intelligence each and every time a computer carries out a task previously considered intelligent, and you will continue to revise the definition even to the point where computers match every capability of humans, simply because you refuse to consider the possibility of an intelligent computer.
 
  • #25
DaveC426913 said:
While this may be true:


It does not follow that this is true:

Why don't you make the statement false rather than simply unproven by giving us a definition of intelligence for which it would be possible (in the sense of Popper falsifiability) for a computer to pass?
 
  • #26
CRGreathouse said:
Why don't you make the statement false rather than simply unproven by giving us a definition of intelligence for which it would be possible (in the sense of Popper falsifiability) for a computer to pass?
The onus is not on me to bolster any claims. jVincent is making an unfounded claim based on faulty logic about what I/we might decide in the future. Why would I need to address that? All I need do is knock the pins out from under the claim.
 
  • #27
DaveC426913 said:
The onus is not on me to bolster any claims. jVincent is making an unfounded claim based on faulty logic about what I/we might decide in the future. Why would I need to address that? All I need do is knock the pins out from under the claim.

I didn't say you had to, I invited you to enlighten us.
 
  • #28
As others have mentioned, a lot of this stuff is already being done. For small refinements on linear systems we use Kalman filters; for less well-known stuff, system identification, like what Lennart Ljung writes about. Even then, you're dealing with mostly linear systems.

It's one thing to run cases based on mathematics that is well-known. But what if the thing you're studying doesn't behave according to the known models? Maybe you've got to invent a new type of math to solve the problem. And even if you could do that, what are the chances that you get back something so complex that it is beyond your comprehension?
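For reference, here is a minimal scalar Kalman filter of the kind mentioned above, refining an estimate of a single noisy quantity under a linear model. The noise variances and the readings are made-up numbers for illustration.

Code:
def kalman_1d(measurements, q=1e-3, r=0.25, x0=0.0, p0=1.0):
    # q: process noise variance, r: measurement noise variance.
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                # predict: uncertainty grows a little each step
        k = p / (p + r)          # Kalman gain: how much to trust the new reading
        x = x + k * (z - x)      # update the estimate toward the measurement
        p = (1 - k) * p          # shrink the uncertainty accordingly
        estimates.append(x)
    return estimates

noisy = [1.1, 0.9, 1.3, 0.7, 1.05, 0.95, 1.2, 0.8]   # the true value is about 1.0
print(kalman_1d(noisy))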
 
  • #29
Proton Soup said:
It's one thing to run cases based on mathematics that is well-known. But what if the thing you're studying doesn't behave according to the known models?
That's the interesting part to me. Many of the interesting problems may be solved using ideas we haven't even thought of. It has to be able to invent or evolve new models and new math. What if you have it throw out random groupings of axioms and structures and have it analyze the consequences of each?

Proton Soup said:
Maybe you've got to invent a new type of math to solve the problem. And even if you could do that, what are the chances that you get back something so complex that it is beyond your comprehension?
Quite possible. In fact, I read about a group that evolved electronic circuits to perform certain tasks. It turned out that the evolved versions worked much better than the conventional solutions, but the scientists had no clue why. The evolved circuits were strange and counterintuitive, but they worked.

What about evolving ways for the computer to explain it that we will understand? How about a distributed computing project where the computer generates English-language explanations and then people grade them on how understandable they are? Then the system evolves the explanations themselves. Repeat until grokked. It's like a tutor... you don't get it this way? Well, look at it this way... No? How about this... etc. Start with simple things; the system eventually gets an idea of what humans find understandable and what we don't, and it can apply this knowledge to more complex explanations.
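A sketch of that grade-and-evolve loop for explanations, with a stand-in for the distributed human grading (here it just pretends shorter is clearer); the grading function, the mutation moves, and the seed sentence are all made-up assumptions.

Code:
import random

def grade(explanation):
    # Stand-in for crowd-sourced human grades: pretend shorter is more understandable.
    return 1.0 / (1.0 + len(explanation.split()))

def mutate(explanation):
    # Drop or duplicate a random word; a real system would paraphrase.
    words = explanation.split()
    if len(words) > 3 and random.random() < 0.5:
        del words[random.randrange(len(words))]
    else:
        words.insert(random.randrange(len(words) + 1), random.choice(words))
    return " ".join(words)

explanations = ["c prohibits d thereby creating b which is then enhanced by a"] * 10
for _ in range(50):
    explanations.sort(key=grade, reverse=True)           # best-graded first
    explanations = explanations[:5] + [mutate(e) for e in explanations[:5]]
print(explanations[0])                                   # "repeat until grokked"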

Also, it might be good to develop a taxonomy of the different types of relationships/interactions that 2 entities can possibly have. These would be used in generating theories and explanations. Things like this:
- a close to b
- a blocks b
- a accelerates b
- a similar to b
- a and b caused by c, a and b cause c
- a causes b, b causes a
- a same as b
- a prohibits b
- a enhances b
- a inhibits b
- a touches b
- a part of b
- a comes from b
- a is b
- a is not b
etc...

And you can stack them: (a enhances (b comes from (c prohibits d))) is similar to e

And that's hard to understand, so computer gives you this:
c prohibits d, thereby creating b, which is then enhanced by a. The enhanced b is similar to e.

and then the explanation is further evolved:

c stops d and that makes b. Then a makes b stronger. Now b is like e.


ALSO...now you can plug your data into these things and get starter theories. What if data is:
1) the existence of light
2) the existence of protons

So you could have these theories:
- light comes from protons
- light blocks protons
- (light enhances (protons cause light))
- light is protons
etc.

Then it would have to look at the implications of each of these. That's harder.
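A rough sketch of the relation taxonomy and "starter theories" above: a small made-up vocabulary of relations, the two entities taken from the toy data (light and protons), and brute-force enumeration of candidate statements, including one level of stacking. Checking the implications of each candidate, the hard part, is not attempted here.

Code:
from itertools import product

RELATIONS = ["comes from", "blocks", "enhances", "causes", "is", "is not"]
ENTITIES = ["light", "protons"]          # i.e. "the data" in the example above

def render(theory):
    # Turn a nested (a, relation, b) tuple into a readable sentence.
    if isinstance(theory, str):
        return theory
    a, rel, b = theory
    return f"({render(a)} {rel} {render(b)})"

# Flat starter theories: every ordered entity pair under every relation.
starters = [(a, rel, b)
            for a, b in product(ENTITIES, repeat=2) if a != b
            for rel in RELATIONS]

# One level of stacking, in the spirit of (a enhances (b comes from (c prohibits d))).
stacked = [(a, rel, t) for t in starters for a in ENTITIES for rel in RELATIONS]

for theory in starters[:6] + stacked[:3]:
    print(render(theory))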
 
  • #30
To create new theories, a computer would have to think imaginatively, which goes against pretty much everything a computer is. A computer follows a fixed algorithm, and although it can modify this algorithm, it could never spit out something like string theory, because that requires imaginative thinking.
 
  • #31
gk007 said:
To create new theories, a computer would have to think imaginatively, which goes against pretty much everything a computer is. A computer follows a fixed algorithm, and although it can modify this algorithm, it could never spit out something like string theory, because that requires imaginative thinking.

Nah...If your brain can do it, so can a computer. Just an engineering problem at this point.
 
  • #32
Meatbot is correct. Imagination is simply a feature of a very complex system. Make a computer complex enough and it'll be imaginative.
 
  • #33
You would have to create a neural network with as many neurons as a human brain, which is on the order of 100 billion, and each of those neurons would have thousands of connections, but I suppose that is just an engineering problem...
 
  • #34
I think you could model imagination (to a degree at least) by throwing in some randomization. That's (part of) how the evolved circuits that Meatbot mentioned arrived at their "imaginative" solutions.
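A tiny sketch of that idea: most mutations are small local tweaks, but occasionally the search makes a wild random jump it could never reach by refinement alone. The 10% jump rate and the parameter ranges are arbitrary assumptions.

Code:
import random

def mutate(solution, wild_rate=0.1):
    if random.random() < wild_rate:
        # "Imaginative" move: forget the current design entirely.
        return [random.uniform(-10, 10) for _ in solution]
    # Ordinary move: nudge one parameter slightly.
    tweaked = list(solution)
    i = random.randrange(len(tweaked))
    tweaked[i] += random.gauss(0, 0.1)
    return tweaked

print(mutate([1.0, 2.0, 3.0]))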
 
  • #35
gk007 said:
You would have to create a neural network with as many neurons as a human brain, which is on the order of 100 billion, and each of those neurons would have thousands of connections, but I suppose that is just an engineering problem...
There's also mimicking the subtlety of the stimuli and reinforcement between neurons.
 
  • #36
Proton Soup said:
I think you could model imagination (to a degree at least) by throwing in some randomization. That's (part of) how the evolved circuits that Meatbot mentioned arrived at their "imaginative" solutions.

Modeling imagination seems like it could be done. How about modeling creativity? What if you have it throw out random concepts/situations/problems that are at first glance probably unrelated to the problem at hand, and then have it look for similarities between them? It also examines the other attributes of the second item that don't SEEM to match and considers whether they might really match somehow if you thought about it.

Take a lamp and a desk fan. Both have mass. Both use electricity. Both are made of quarks. Both are plastic. Both are white. Etc... Possibly useful. Ok, now what about a quality of the fan that doesn't seem to be present in the lamp at first glance. A fan makes air move. At first glance, most people would not say a lamp makes air move and would overlook that when listing the qualities of a lamp. But it does make air move by heating it, causing it to rise. A fan also cools people off. So ask if a lamp cools people off. I bet nobody ever asked that question before. Well, I suppose it might. Maybe it makes hot air rise above it, pulling cooler air in the bottom to replace it and creating a cooling air current. Even harder: a fan creates a force that tries to accelerate it. Does a lamp do that? Maybe. Does a lamp have something that spins? Does a fan create light? Maybe doing this kind of thing creates useful insight.
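As a sketch of that comparison, assuming toy attribute lists for the two objects: the shared attributes come out as "possibly useful", and each attribute present in one object but not listed for the other becomes one of those "nobody ever asked that" questions.

Code:
attributes = {
    "fan":  {"has mass", "uses electricity", "is made of quarks", "is plastic",
             "moves air", "cools people", "has something that spins"},
    "lamp": {"has mass", "uses electricity", "is made of quarks", "is plastic",
             "creates light"},
}

shared = attributes["fan"] & attributes["lamp"]
print("Shared, possibly useful:", sorted(shared))

# Attributes of one object re-asked of the other: the "creative" questions.
for a, b in [("fan", "lamp"), ("lamp", "fan")]:
    for attr in sorted(attributes[a] - attributes[b]):
        print(f"The {a} {attr}; does the {b}, too?")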

You can do the same thing with cause and effect, with a variable thrown in:
"x causes mass" vs. "removing energy from water causes ice"
So, possible questions (which can be starter theories as well):
- Is mass caused by a modification of something that already exists?
- Does removing energy from something create mass?
- Is mass equivalent to a solid?
- Is there a "liquid" form of mass?

Just throwing stuff out there...a rough sketch.

An interesting related link, the Theory of Inventive Problem Solving. Some of these techniques could be applied: http://www.mazur.net/triz/
 