How much could an A.I. learn about physics

AI Thread Summary
The discussion revolves around the capabilities and limitations of a highly intelligent AI confined to a small room with basic human-like senses and no external information sources. The AI could potentially learn macroscopic laws governing interactions within its environment, driven by curiosity, but would struggle to understand concepts beyond its immediate experience, such as gravity's cause or the existence of an "outside." Its learning would be heavily influenced by its initial programming and the extent of its base knowledge. Unlike humans, who develop through instinct and education, the AI's ability to explore and learn would be restricted by its pre-set functions and programmed responses. The conversation highlights that while the AI might simulate various scenarios and derive some fundamental principles, it would lack the context to determine its actual universe. The limitations of binary programming and the necessity for ongoing input to enhance learning capabilities are also emphasized, suggesting that a closed environment would hinder the AI's potential for growth and understanding.
Razorback-PT
...with very limited ability to do any experiments.

Imagine there's a super smart A.I. that has perfect logic. It has no access to the internet or any science books. Let's say it has a robot body with the same basic senses humans have: sight, sound, touch. It also has legs, arms, and hands to manipulate objects. It's locked in a small room with some simple objects, like maybe a table, some chairs, and a lamp to illuminate the room.

What do you think are the limits of what it could learn about the world?
 
It could, but might not, learn all macroscopic laws governing whatever interactions can be set up among the objects, including itself. It probably would, since curiosity is a prime ingredient of intelligence. As for anything it can't detect, I doubt it could extrapolate that far. It would know, for instance, that gravity exists and how it behaves in that room, but not what causes it or how anything outside experiences it. In fact, it wouldn't know that there is an "outside". Old Testament-era humans, and for centuries beyond, believed that we lived on the inside of a crystal bowl turned upside-down with nothing beyond. There's no reason to expect a machine to investigate further.
 
Could it discern anything at all about electromagnetism from examining the light in the room? Could it study its own thoughts to understand what sort of brain it has and how it works? Maybe it could derive quantum mechanics from understanding how silicon chips work.
 
I very much doubt it. For one thing, trying to examine itself down to the chip level would result in its immediate cessation of function.
Really, the only way to know is to build one and wait to see what happens.
 
An AI needs a starting point programmed into it, so whatever level of knowledge the programmer set would greatly affect the direction in which it started to inquire about its surroundings. A child is an instinctive creature built up over millennia, and will first explore its ability to interact (see, smell, touch, taste, hear, etc.). In an AI, those actions would already be programmed to perform at set values, which limits some potential: even if the AI has a broader spectrum of sight, how it uses it is limited by the program that makes it function, whereas a child may develop eyesight outside the expected range (near-sighted, far-sighted, color-blind, exceptionally good eyesight, etc.), which will greatly affect its perception of the world around it.

Humans needed education to come up with all our advanced specialties. Removing education from the AI would most likely stunt its ability to advance beyond the most rudimentary discoveries.

You mention "super smart" but don't elaborate. Smart can mean many things. Which direction of knowledge was it pointed in? If you program the AI to be extra inquisitive with a small bank of base knowledge, you'd get vastly different results than from a huge bank of base knowledge with only a passing interest in learning.
 
You're right. This exercise is completely dependent on the details of how the senses of the robot work, and what software it already has pre-installed.

Its senses are human-like.
When I say super smart, I imagine something along these lines: its base knowledge is a sense of self, and its language is mathematics. Its function is to gather all the information it can. All sorts of information. All possible relations between the objects in the room.
It can use that information to create simulations: extrapolate trillions of different possible universes that would all match the rules of the room it is in.

My opinion is that the AI would have no evidence to know which universe it was in, but inside its mind, one of the simulations would be bound to be pretty close.

By pretty close I mean: maybe it solved physics from the top down. Maybe the few axioms it had evidence of were enough to derive the ultimate theory of everything. But from there it can't know which of the infinite possible universes it is actually inhabiting.
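Here is a toy illustration of that underdetermination point, in Python. It assumes the robot's only gravity datum is the free-fall acceleration measured in the room, and it tests a hypothetical one-parameter family of force laws F = K/r^n; the numbers and the choice of family are illustrative assumptions, not anything established in the thread.

```python
# Toy underdetermination demo: the robot's only gravity measurement is a
# single free-fall acceleration, taken at one fixed distance from the
# attracting mass. Many candidate force laws reproduce that one number.
G_OBSERVED = 9.8     # m/s^2: the lone datum available inside the room
R_SURFACE = 6.371e6  # assumed distance to the attracting mass, in meters

def fit_constant(n):
    # Choose K so that K / r**n matches the observation at r = R_SURFACE.
    return G_OBSERVED * R_SURFACE ** n

def acceleration(n, r):
    return fit_constant(n) / r ** n

# Candidate "universes": force laws with different distance exponents.
surviving = [n for n in (1.0, 1.5, 2.0, 2.5, 3.0)
             if abs(acceleration(n, R_SURFACE) - G_OBSERVED) < 1e-9]
print("exponents consistent with the room's data:", surviving)
```

Every candidate exponent survives, because measurements taken at a single distance scale cannot separate them; only observations at other scales, outside the room, would break the tie.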
 
"sense of self" is giving the AI the definition of life. not sure that can be quantified as a level of knowledge as much as its a way of interpreting knowledge.
All programs are a form of mathematics. The use of binary could be a limiting factor, since it boils down to "on or off" and lacks the middle ground to some extent. As I understand artificial intelligence as I've seen it applied, the program changes its responses to the data it collects; as the AI gathers more data, it makes more informed choices from the increased possibilities of that data. For example: estimating the price of a service by taking into account the cost of materials, delivery, cost of labor, distance to the site, wear and tear on tools, admin costs, overhead, etc. The more examples of costs the AI is given, the more accurate its estimates become. This only describes a rudimentary level of AI, but it stands as an example of a growing ability that depends on new input for further progress (a toy version is sketched below).

Your experiment of a closed room with limited stimuli is likely to have very poor results, because you're not enabling the AI to grow. The question makes me think of asking whether Schrödinger's cat can learn physics in the box.
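For concreteness, here is a minimal sketch of that kind of rudimentary estimator: an online least-squares learner whose price predictions sharpen with each example it is fed. The feature names, learning rate, and the simulated pricing rule are illustrative assumptions, not any particular deployed system.

```python
# Minimal sketch of a "rudimentary AI" price estimator: an online
# least-squares learner that refines its predictions example by example.
import random

# Hypothetical cost inputs for a service job (illustrative names).
FEATURES = ["materials", "delivery", "labor", "distance",
            "tool_wear", "admin", "overhead"]

class PriceEstimator:
    def __init__(self, lr=0.05):
        self.weights = {f: 0.0 for f in FEATURES}
        self.bias = 0.0
        self.lr = lr  # step size for each gradient update

    def predict(self, job):
        return self.bias + sum(self.weights[f] * job[f] for f in FEATURES)

    def learn(self, job, actual_price):
        # One gradient step on the squared error: each new example
        # nudges the weights, so estimates improve as more data arrives.
        error = self.predict(job) - actual_price
        self.bias -= self.lr * error
        for f in FEATURES:
            self.weights[f] -= self.lr * error * job[f]

# Simulate a stream of priced jobs drawn from a hidden "true" pricing rule.
random.seed(0)
true_w = {f: random.uniform(5.0, 20.0) for f in FEATURES}
est = PriceEstimator()
for step in range(2001):
    job = {f: random.uniform(0.0, 1.0) for f in FEATURES}
    price = 25.0 + sum(true_w[f] * job[f] for f in FEATURES)
    if step % 1000 == 0:
        print(f"step {step}: estimate off by {abs(est.predict(job) - price):.2f}")
    est.learn(job, price)
```

The post's point drops out directly: `learn` is the only thing that moves the weights, so a closed room with no fresh examples freezes the estimator at whatever accuracy it last reached.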
 