PeroK said:
Even if it wasn't "designed for facts", whatever that means, it can provide factually accurate information across a range of subjects.
But it doesn’t reliably provide factual information across the range of subjects of most interest to me. Here are some queries and responses I just did a minute ago:
Q: why is the square of a number larger than the number itself?
A: The square of a number is larger than the number itself because when you square a number, you are multiplying the number by itself. For example, the square of 2 is 4 because 2 squared is equal to 2 x 2 = 4. This means that the result will always be larger than the original number because you are essentially doubling the value of the number by multiplying it by itself.
This is a factually wrong answer. The correct answer is that the square of a number is not always larger than the number itself. Maybe you will say, "that was a trick question; the AI cannot detect when the question itself is wrong."
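For concreteness, the standard counterexamples are the numbers between 0 and 1 (endpoints included):
$$x^2 \le x \quad \text{for } 0 \le x \le 1, \qquad \text{e.g. } \left(\tfrac{1}{2}\right)^2 = \tfrac{1}{4} < \tfrac{1}{2} \quad \text{and} \quad 1^2 = 1.$$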
Q: I read that gravity is a result of electromagnetism
A: That is actually incorrect. Gravity is a fundamental force in nature that is separate from electromagnetism. Gravity is described by Einstein's theory of general relativity, which posits that objects with mass or energy warp the fabric of spacetime, causing other objects to move towards them. Electromagnetism, on the other hand, is a force that acts between charged particles and is described by the laws of classical electromagnetism. The two forces operate independently of each other and have different underlying mechanisms.
This is a good answer, and shows that the programming allows it to contradict the user.
Q: What forces act on a car accelerating on level ground?
A: When a car is accelerating on level ground, the main forces acting on it are:
1. Engine force: This is the force generated by the car's engine to propel the vehicle forward.
2. Frictional force: The frictional force between the tires and the road surface provides the traction necessary for the car to move forward.
3. Air resistance: As the car moves forward, it encounters air resistance, which opposes the motion of the car and affects its speed.
4. Inertia: Inertia is the resistance of an object to changes in its motion. As the car accelerates, its inertia must be overcome by the engine force to increase its speed.
5. Gravity: Gravity acts vertically downward on the car, but on level ground, it doesn't have a significant effect on the car's acceleration unless it is moving up a hill or downhill.
The "engine force" is not an external force on the car (the forward push comes from the road, via friction on the tires), inertia is not a force at all, and the normal force is missing entirely.
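For comparison, the textbook free-body analysis of a car accelerating on level ground (neglecting rolling resistance, and taking the forward drive to be the static friction of the road on the driven tires) is:
$$\vec N + m\vec g + \vec f_{\text{road}} + \vec F_{\text{drag}} = m\vec a,$$
which gives $N = mg$ vertically and $f_{\text{road}} - F_{\text{drag}} = ma$ horizontally.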
PeroK said:
Just one example:
https://pmc.ncbi.nlm.nih.gov/articles/PMC10002821/
In this case, medical professionals are testing ChatGPT's reliability in providing medical information. It's pure personal prejudice to pretend this sort of thing isn't happening.
I know it is happening, and it scares me. There is no way that ChatGPT would make it through the FDA as a medical device approved to give medical advice, or even to support a physician in generating medical advice. And yet some doctors will read this paper and try it anyway.
It got only 1 of the 3 questions I just posed right, and it did better on the medical questions in that study. I suspect sites like WebMD provide a lot of word associations in medicine, but with no fact model it cannot do basic reasoning about math and physics. Again, it functions as designed.
PeroK said:
One example would be a chess engine that has no chess openings programmed into it. But, it might still play standard openings because these emerge from its generic move analysis algorithms. This happened to a large extent with AlphaZero. In fact, AlphaZero was only given the rules of chess. By your reasoning, it would have dumbly played for immediate checkmate all the time. But, it didn't. And all the strategic and tactical play emerged from AlphaZero, with no explicit human design.
That is not at all a counterexample to my rule. AlphaZero functioned as designed. Why would you think that was outside its design?