An elementary analysis problem that I would like a hint on.

jdinatale
I could probably find the answer to this problem easily by a quick google search, but I don't want to spoil it. Instead, could someone give me a hint in the right direction?



[Attached image of the problem statement: prove that between any two real numbers ##a < b## there exists an irrational number.]


OK, so it seems to me like a proof by contradiction would work here, since directly proving the existence of an irrational between two arbitrary real numbers seems impossible.

Assuming that EVERY number between two arbitrary real numbers is rational seems like good grounds for a contradiction. From here, I thought of two things:

Consider that ##a < \frac{a+b}{2} < b## and show that ##\frac{a+b}{2}## is irrational. But that won't work: ##a## and ##b## could both be rational, in which case ##\frac{a+b}{2}## is rational too.

Next, I thought to consider the geometric mean, ##a < \sqrt{ab} < b##, and show that ##\sqrt{ab}## is irrational. But this is a problem: if one of ##a## or ##b## is negative (say ##a < 0 < b##), then ##ab < 0## and the geometric mean does not exist.

Now, I only have access to bare-bones tools: the fact that the real numbers form a field, the Axiom of Completeness, and the Archimedean Principle.

The Archimedean Principle appears useful, but I'm not sure how to cook up an irrational number using it.
 
##\sqrt{ab}## is still a good place to start. Just use ##\pm\sqrt{|a|\,|b|}## instead, with the sign selected appropriately. Now the only problems are with ##a = 0## or ##b = 0##, and those are easy special cases.
 
Technically, you need to show that the square root exists.

Also, what if ##a=1## and ##b=4##, for example?
 
What happens if you multiply every element of the rationals by a fixed irrational? What can you say about the resulting set?
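
For what it's worth, here is a sketch of the key closure fact behind that hint, using ##\sqrt{2}## as one concrete choice of irrational (any other would do): if ##q \in \mathbb{Q}## with ##q \neq 0## and ##\sqrt{2}\,q## were rational, then
$$\sqrt{2} = \frac{\sqrt{2}\,q}{q}$$
would be a quotient of rationals and hence rational, a contradiction. So a nonzero rational times an irrational is irrational, and the resulting set consists (apart from ##0##) entirely of irrationals.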
 
Did you prove that the rationals are dense in the reals yet? If not, you might want to do that first...
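
If not, a sketch of the standard argument via the Archimedean Principle (with details left to fill in): given ##a < b##, choose ##n \in \mathbb{N}## with ##n(b - a) > 1##, and let ##m## be the least integer with ##m > na##. Then ##na < m \le na + 1 < nb##, so
$$a < \frac{m}{n} < b.$$
Combined with an irrational shift (or with the multiplication idea in the previous hint), this finishes the problem: for instance, a rational ##q## with ##a - \sqrt{2} < q < b - \sqrt{2}## gives the irrational number ##q + \sqrt{2}## in ##(a, b)##.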
 
Another approach would be to use a cardinality argument. This assumes you have already established that ##\mathbb{Q}## is countable and ##\mathbb{R}## is uncountable. Given these facts, what can you say about the cardinality of the interval ##(a,b)##, and what can you conclude from that?
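
Sketched under those assumptions: the map ##x \mapsto a + (b - a)x## is a bijection from ##(0,1)## onto ##(a,b)##, and ##(0,1)## is in bijection with ##\mathbb{R}##, so ##(a,b)## is uncountable. If ##(a,b)## contained only rationals, it would be a subset of the countable set ##\mathbb{Q}## and hence countable, a contradiction. So ##(a,b)## contains (in fact uncountably many) irrationals.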
 