# Has anyone ever programmed a computer system to evolve theories?

1. Aug 20, 2010

### Meatbot

You would feed it experimental data and it would generate theories or mathematical structures which fit the data. Each theory would then be run against related experimental data to see whether it predicted those results as well. Slight mutations, mating successful ones ---> evolved theory. Impossible, or just hard?
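The loop described here (generate candidates, score them against data, mutate and mate the best) is essentially a genetic algorithm. Here is a minimal sketch, assuming candidate "theories" are just quadratic coefficient triples and the data hide a known law; all names and the hidden law are illustrative, not from any real system:

```python
import random

random.seed(0)  # deterministic run for reproducibility

# Hidden law the data obey: y = 3x^2 + 2x + 1
DATA = [(x, 3 * x * x + 2 * x + 1) for x in range(-5, 6)]

def error(theory):
    """Sum of squared prediction errors: lower means a better fit to the data."""
    a, b, c = theory
    return sum((a * x * x + b * x + c - y) ** 2 for x, y in DATA)

def mutate(theory):
    """Slight mutation: jiggle every coefficient a little."""
    return tuple(g + random.gauss(0, 0.1) for g in theory)

def mate(t1, t2):
    """Mating: the child takes each coefficient from a random parent."""
    return tuple(random.choice(pair) for pair in zip(t1, t2))

def evolve(generations=200, pop_size=50):
    # Start from random "crackpot" theories, keep the best, breed the rest.
    pop = [tuple(random.uniform(-5, 5) for _ in range(3)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=error)
        survivors = pop[: pop_size // 4]  # selection: keep the best quarter
        pop = survivors + [
            mutate(mate(random.choice(survivors), random.choice(survivors)))
            for _ in range(pop_size - len(survivors))
        ]
    return min(pop, key=error)

best = evolve()
```

With this setup the evolved triple drifts toward (3, 2, 1), the hidden law; a real system would have to evolve the structure of the expression as well, not just its coefficients.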

2. Aug 20, 2010

### DaveC426913

Define 'theory' to a computer.

3. Aug 20, 2010

### Meatbot

Maybe you don't need to. It would be programmed with the current theories and the current data, and would then start riffing off the current theories, modifying them in millions of ways and testing against the data. Adding/removing terms and relationships. Mixing structures. It could begin by throwing out millions of what-ifs and "crackpot" theories until one sticks, then evolving it. Just brute force.

1) What if the universe is 1D? Fail: doesn't fit the data.
2) What if it's 2D? Fits more data, since you can have things in more than one place, which we see, but fails against other data.
3) What if it's 3D? Fits even more data.
4) What if it's 3D and stuff doesn't move? Fail: the data show movement.
5) What if it's 3D and stuff moves? Fits some data but doesn't address all of it.

Start with really simple things and keep getting more and more complex.....

It would consist of lots of simple building blocks or observations.
1) Something exists. (things > 0)
2) There are many things. (things > 1)
3) A thing can be different from another thing. (thing(a) ≠ thing(b))
4) Some things are blue.
5) Blue things reflect light of wavelength x....
etc...

And then various structures would have properties:
- sphere has more than one point
- sphere has volume
- sphere is 3d
etc....

Theory: Universe consists of two spheres.
Check against data: data shows 3 spheres, therefore spheres > 2.
Result: Fail; modify and try again.
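The Theory/Check/Result loop above can be sketched as a brute-force search; the observed count and all names here are hypothetical stand-ins:

```python
# Brute-force version of the check-modify loop: propose a sphere count,
# test it against the data, and modify it on failure.

observed_spheres = 3  # stand-in for "data shows 3 spheres"

def fits_data(theory):
    """Check the theory against the data."""
    return theory["spheres"] == observed_spheres

def evolve_theory():
    theory = {"spheres": 2}        # Theory: universe consists of two spheres
    while not fits_data(theory):   # Check against data
        theory["spheres"] += 1     # Fail: modify and try again
    return theory
```

A real system would of course need a far richer space of modifications than "add one more sphere", but the propose/test/modify skeleton is the same.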

I'm not a computer science or physics guy, but it seems like it could be done, even though it'd be hard.

Last edited: Aug 20, 2010
4. Aug 20, 2010

### Andy Resnick

http://www.sciencemag.org/cgi/content/abstract/324/5923/85
http://singularityhub.com/2009/12/17/eureqa-software-to-replace-scientists/

If nothing else, it's interesting to think about.

5. Aug 20, 2010

### mugaliens

Great start, and yes, actually, I did, back in the 80s, only it was about the extent of the above, times 10,000.

We basically sat around modeling ourselves with questions and answers for a few months.

I think we did well, but not terrific.

6. Aug 20, 2010

### DaveC426913

Again, define 'theory' to a computer.

i.e. how does a computer program know what a 'theory' is?

7. Aug 21, 2010

### Pythagorean

Why is that necessary?

Evolution doesn't work from a definition. It's a matter of how a certain set of stimuli (inputs) are classified. I.e., definitions are inferred from observations. Generalizations are made across the observations. Then, for example, an exception to a generalization is found, a fission occurs, and a new branch of classifications is developed. As more observations are made, the generalizations are optimized.

Then, for our own sake, we might define a theory as a generalization that meets a minimum threshold of 0.4 on the "generalness rating scale", and that can be tested by applying the generalization to future observations that haven't been directly observed but are similar to past observations.
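That thresholded notion can be sketched directly. The 0.4 cutoff comes from the post; the observations, predicates, and scoring rule are illustrative assumptions:

```python
# A "generalization" is a predicate over observations; its "generalness"
# is the fraction of observations it covers; anything scoring at least
# 0.4 is promoted to "theory". All of this is a toy illustration.

observations = [2, 4, 6, 8, 9, 10]

generalizations = {
    "all even": lambda n: n % 2 == 0,
    "all > 5": lambda n: n > 5,
    "all < 3": lambda n: n < 3,
}

def generalness(pred, obs):
    """Fraction of the observations covered by this generalization."""
    return sum(1 for o in obs if pred(o)) / len(obs)

theories = {name for name, pred in generalizations.items()
            if generalness(pred, observations) >= 0.4}
# "all even" covers 5/6 and "all > 5" covers 4/6; both clear the bar,
# while "all < 3" (1/6) does not.
```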

8. Aug 21, 2010

### vanesch

Staff Emeritus
Just feed in lots of data and the "system" will find out.
It turned out to be much harder than it sounded.

9. Aug 21, 2010

### Pythagorean

Well yeah, nobody said it was easy.

Isn't quantum computing essentially moving toward this? I mean, they're basing their qubits on the state vector of a fundamental physical system, not just abstractly but in the hardware itself, correct?

10. Aug 21, 2010

### deegee

This could require the program to be a completely evolving one that can make up new math as well, and, based on that, recognize relations and find patterns within patterns.

But then how do you let a computer see how things relate to each other without predefining what it should look for? Recognizing relations should itself be a task it evolves, in order to find new theories.

Although I do think predefining it with our current knowledge could already let it find interesting patterns.

It all reminds me of this:
http://singularityhub.com/2010/05/12/stephen-wolfram-is-computing-a-theory-of-everything-video/

11. Aug 21, 2010

### DaveC426913

Precisely. How do you quantify all that for a computer to understand, let alone manipulate?

I can see finding equations to describe things (maybe it could deduce Kepler's Laws) but theories are semantic.

12. Aug 21, 2010

### Meatbot

There has to be a way of doing this, since our brains can do it. How did you figure it all out? How do you know there is more than one thing in the world? Perhaps as a baby you took in visual data, and since the input described multiple areas of color, your brain generalized and created a structure that somehow encodes the idea that there is more than one thing in the world. There must be certain arrangements of matter which correspond to that idea. Let the computer learn semantics the same way we do.

Or figure out how we represent semantics in our brains and then duplicate it - actually, this might be faster. I don't think any of this is easy - just that teaching a computer semantics shouldn't be impossible. Exactly how is above my pay grade.

Check this out:
http://en.wikipedia.org/wiki/Computational_semantics

Last edited: Aug 21, 2010
13. Aug 21, 2010

### Pythagorean

Ah, yes, I guess we have a small difference in the definition of theory. The semantic part of the theory would have to be developed by humans (well, today anyway). The program would find correlations in the data and derive generalized equations (like F = ma and F21 = -F12) that are episodic (for each data set). We could derive a program that is semantic in algorithm (it takes many "episodes" and classifies them), which is essentially what semantics is. Your numerous episodic encounters with apples have led you to the semantics of what an apple is.
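Finding a generalized equation across episodes can be sketched as a regression problem. Here the episodes are hypothetical (mass, acceleration, force) records, and the program "discovers" the constant in F = k*m*a by least squares; the data and names are stand-ins, not from any real system:

```python
# Each episode is one experiment: (mass, acceleration, measured force).
episodes = [(2.0, 3.0, 6.0), (1.5, 4.0, 6.0), (5.0, 0.5, 2.5)]

def fit_constant(data):
    """Least-squares slope through the origin for F = k * (m * a):
    k = sum(x * y) / sum(x * x), with x = m * a and y = F."""
    num = sum(m * a * f for m, a, f in data)
    den = sum((m * a) ** 2 for m, a, f in data)
    return num / den

k = fit_constant(episodes)  # close to 1.0 when the data obey F = ma
```

Fitting one constant inside a fixed form is the episodic, non-semantic part; choosing the form F = k*m*a in the first place is the part the thread is arguing about.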

BTW, never argue with a computer over semantics. Pigs like getting dirty. Or something.

14. Aug 21, 2010

### DavidSnider

Sure, 3 billion years of cumulative selection. I don't think I can afford that much compute time.

15. Aug 21, 2010

### DaveC426913

Oh. I thought you meant like "within this century", not far flung future.

16. Aug 21, 2010

### D H

Staff Emeritus
That is the essence of automated pattern recognition and machine learning. This isn't new stuff, Dave. http://ti.arc.nasa.gov/tech/rse/synthesis-projects-applications/autoclass/

Actually, doing just that has long been the goal of Cyc, along with other less well-known AI projects. It most certainly is the goal of the Semantic Web, aka Web 3.0.

Last edited by a moderator: Apr 25, 2017
17. Aug 21, 2010

### DaveC426913

Yes, that's what I said. Computers could discover new equations (or new classes of stars), I'm just dubious about new theories.

Last edited by a moderator: Apr 25, 2017
18. Aug 22, 2010

### jumpjack

Impossible. Computers are just fast, not smart. Very fast, very stupid. No induction, only deduction.

19. Aug 22, 2010

### jVincent

The definition of smart/intelligent has evolved during the rise of computers to mean exactly that which a computer is not, and if this continues, computers will never be intelligent. This does not mean that they cannot solve particular classes of problems better than humans. It just means that in a hundred years, when a computer is capable not only of independently rediscovering all the theories created by Einstein but also of writing several books on the philosophical implications of those theories, we will have redefined "intelligent" to somehow still exclude this behavior, because the thought of an intelligent machine is simply not something we are willing to accept.

That being said, of course a computer can be programmed to prove and test theorems, just like any human can. The four color theorem was proven by computer, for example.

20. Aug 22, 2010

### DaveC426913

If it does happen, it will not be because we are unwilling to accept the idea of an intelligent machine; it will be because we realize that intelligence is not actually required to do some of the things we thought it was.

Computers today *are* dumb. If they can solve things, then those things do not require intelligence to solve.

21. Aug 23, 2010

### Galap

We have computation that is decent at curve fitting and statistical analysis. Mathematical modeling of data.

However, something like you describe I think is quite possible, just not with current computers. I think something like what you describe would be starting to get close to genuine AI, which I also think is quite possible, just not with current computers.

22. Aug 24, 2010

### jVincent

You are proving my point exactly: you have offered no objective definition of intelligence, yet you seem completely confident that whatever definition is the "correct" one, it will exclude computers, because you consider them to be dumb. You are therefore fully accepting that we revise our definition of intelligence each and every time a computer carries out a task previously considered intelligent, and that we will continue to revise it even to the point where computers match every capability of humans, simply because you refuse to consider the possibility of an intelligent computer.

By what argument are computers dumb? Reductionism works equally well for computers as it does for humans and as it does for animals. Trying to manipulate your definitions of dumb and intelligent with the objective of forcing these subjects into the categories that you have already decided they should be in simply leads to the definitions being meaningless.

23. Aug 24, 2010

### DaveC426913

While this may be true:
It does not follow that this is true:

24. Aug 25, 2010

### aq1q

25. Aug 25, 2010

### CRGreathouse

Why don't you make the statement false, rather than simply unproven, by giving us a definition of intelligence that it would be possible (in the sense of Popperian falsifiability) for a computer to pass?