# Math Challenge - January 2021

In summary: The solution can be seen as a corollary of the theorem about continuous functions on compact sets, but the theorem can at the same time be used to give a direct proof. It is a beautiful theorem and proof. The conversation covers topics such as linear programming, trigonometry, calculus, PDEs, differential matrix equations, function theory, linear algebra, irrationality, group theory, and ring theory. It also includes discussions on solving equations, calculating integrals, proving the existence of certain paths, and determining values of variables in equations. Some of the problems addressed include solving differential equations, showing the irrationality of certain numbers, and proving the existence of certain solutions using analytical and topological tools.
fresh_42 said:
5. Let ##A\in \mathbb{M}(n,\mathbb{R})## be a real square matrix. Show that there is a parameterized path ##x\, : \,\mathbb{R}\longrightarrow \mathbb{R}^n## as solution of the differential equation ##\dot x(t)=Ax(t)## which is unique for any given initial condition ##x(t_0)=x_0.##
I didn't notice there was one more problem, so here's my go at it.

Claim. ##\mathbf{x}(t)## is analytic on ##\mathbb{R}##.
Proof. From ##\dot{\mathbf{x}}(t)=A\mathbf{x}(t)##, it immediately follows that ##\mathbf{x}^{(k)}(t)=A^k\,\mathbf{x}(t)##. Introduce the operator norm ##\|A\|=\sup_{\mathbf{0}\neq\mathbf{x}\in\mathbb{R}^n}\frac{\|A\mathbf{x}\|}{\|\mathbf{x}\|}##, which is a subadditive and submultiplicative norm. It follows that $$e^{(t-t_0)A}\,\mathbf{x}(t_0)=\sum_{k=0}^\infty \frac{\mathbf{x}^{(k)}(t_0)}{k!}(t-t_0)^k$$ exists for all ##t,t_0\in\mathbb{R}##, with convergence of ##e^{(t-t_0)A}## provided by the operator norm, and convergence of the RHS provided by the standard norm on ##\mathbb{R}^n##, respectively (note that ##\|\mathbf{x}^{(k)}(t_0)\|\leq\|A\|^k\|\mathbf{x}(t_0)\|##). One can check that this is a solution of the differential equation ##\dot{\mathbf{x}}(t)=A\,\mathbf{x}(t)##.

We now prove that ##\mathbf{x}(t)=e^{(t-t_0)A}\,\mathbf{x}(t_0)##: to be more precise, $$(\forall t,t_0\in\mathbb{R})\lim_{N\rightarrow\infty}\Bigg(\mathbf{x}(t)-\sum_{k=0}^N\frac{\mathbf{x}^{(k)}(t_0)}{k!}(t-t_0)^k\Bigg)=0.$$ By Taylor's theorem, one has that ##\mathbf{x}(t)=\Big(\sum_{k=0}^N\frac{\mathbf{x}^{(k)}(t_0)}{k!}(t-t_0)^k\Big)+\mathbf{r}_N(t)##, where $$(\exists\xi\in\mathbb{R})\,\,\mathbf{r}_N(t)=\frac{\mathbf{x}^{(N+1)}(\xi)}{(N+1)!}(t-t_0)^{N+1}$$ for some ##\xi## between ##t_0## and ##t##. Since ##\mathbf{x}^{(N+1)}(\xi)=A^{N+1}\,\mathbf{x}(\xi)##, we may take the operator norm to get $$\|\mathbf{r}_N(t)\|\leq\frac{\|A\|^{(N+1)}\|\mathbf{x}(\xi)\|}{(N+1)!}|t-t_0|^{N+1}$$ and from this one clearly sees that ##\lim_{N\rightarrow\infty}\mathbf{r}_N(t)=0##. Thus the series for ##e^{(t-t_0)A}\,\mathbf{x}(t_0)## converges pointwise to ##\mathbf{x}(t)##.

I wonder if Taylor's theorem applies in the infinite-dimensional setting with a bounded operator. If so, I think this result should generalize.

Edit: I just realized you have to modify the formula for the remainder term slightly. To be precise, you may need a different ##\xi## for each component of ##\mathbf{r}_N(t)##, although the theorem still holds.
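Since the argument is constructive, it can also be illustrated numerically. The sketch below (my own helper names, assuming NumPy is available) builds the partial sums ##\sum_{k\leq N} A^k\,\mathbf{x}_0\,(t-t_0)^k/k!## and checks the initial condition as well as the ODE via a central difference:

```python
import numpy as np

def series_solution(A, x0, t, t0=0.0, N=60):
    """Partial sum of the Taylor series sum_{k=0}^{N} A^k x0 (t - t0)^k / k!."""
    term = np.array(x0, dtype=float)
    total = term.copy()
    for k in range(1, N + 1):
        term = A @ term * (t - t0) / k
        total = total + term
    return total

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
x0 = rng.standard_normal(4)

# check x(t0) = x0 and, via a central difference, x'(t) = A x(t)
t, h = 0.8, 1e-6
x = lambda s: series_solution(A, x0, s)
dx = (x(t + h) - x(t - h)) / (2 * h)
print(np.allclose(x(0.0), x0), np.allclose(dx, A @ x(t), atol=1e-5))
```

Both checks pass for generic ##A## and ##\mathbf{x}_0##; this is of course only a sanity check, not a substitute for the convergence estimate above.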

suremarc said:
I didn't notice there was one more problem, so here's my go at it.

Claim. ##\mathbf{x}(t)## is analytic on ##\mathbb{R}##.
Proof. From ##\dot{\mathbf{x}}(t)=A\mathbf{x}(t)##, it immediately follows that ##\mathbf{x}^{(k)}(t)=A^k\,\mathbf{x}(t)##. Introduce the operator norm ##\|A\|=\mathrm{sup}_{\mathbf{x}\in\mathbb{R}^n}\frac{\|A\mathbf{x}\|}{\|\mathbf{x}\|}##, which is a subadditive and submultiplicative norm. It follows that $$e^{(t-t_0)A}\,\mathbf{x}(t_0)=\sum_{k=0}^\infty \frac{\mathbf{x}^{(k)}(t_0)}{k!}(t-t_0)^k$$ exists for all ##t,t_0\in\mathbb{R}##, with convergence of ##e^{(t-t_0)A}## provided by the operator norm, and convergence of the RHS provided by the standard norm on ##\mathbb{R}^n##, respectively (note that ##\|\mathbf{x}^{(k)}(t_0)\|\leq\|A\|^k\|\mathbf{x}(t_0)\|##). One can check that this is a solution of the differential equation ##\dot{\mathbf{x}}(t)=A\,\mathbf{x}(t)##.

We now prove that ##\mathbf{x}(t)=e^{(t-t_0)A}\,\mathbf{x}(t_0)##: to be more precise, $$(\forall t,t_0\in\mathbb{R})\lim_{N\rightarrow\infty}\Bigg(\mathbf{x}(t)-\sum_{k=0}^N\frac{\mathbf{x}^{(k)}(t_0)}{k!}(t-t_0)^k\Bigg)=0.$$ By Taylor's theorem, one has that ##\mathbf{x}(t)=\Big(\sum_{k=0}^N\frac{\mathbf{x}^{(k)}(t_0)}{k!}(t-t_0)^k\Big)+\mathbf{r}_N(t)##, where $$(\exists\xi\in\mathbb{R})\,\,\mathbf{r}_N(t)=\frac{\mathbf{x}^{(N+1)}(\xi)}{(N+1)!}(t-t_0)^{N+1}$$ for some ##\xi## between ##t_0## and ##t##. Since ##\mathbf{x}^{(N+1)}(\xi)=A^{N+1}\,\mathbf{x}(\xi)##, we may take the operator norm to get $$\|\mathbf{r}_N(t)\|\leq\frac{\|A\|^{(N+1)}\|\mathbf{x}(\xi)\|}{(N+1)!}|t-t_0|^{N+1}$$ and from this one clearly sees that ##\lim_{N\rightarrow\infty}\mathbf{r}_N(t)=0##. Thus the series for ##e^{(t-t_0)A}\,\mathbf{x}(t_0)## converges pointwise to ##\mathbf{x}(t)##.

I wonder if Taylor's theorem applies in the infinite-dimensional setting with a bounded operator. If so, I think this result should generalize.

Edit: I just realized you have to modify the formula for the remainder term slightly. To be precise, you may need a different ##\xi## for each component of ##\mathbf{r}_N(t)##, although the theorem still holds.
I would have been satisfied by a simple differentiation to prove existence, together with a check of ##x(t_0)=x_0##, which is trivial but should have been mentioned, because the core statement is that for any ##x_0## there is a unique flow. However, it's nice to have an argument for convergence.

But what about uniqueness? You cannot conclude from the unique convergence of your solution, that there cannot be another solution. The vector field might have branches. Uniqueness is crucial, especially for physicists!

fresh_42 said:
I would have been satisfied by a simple differentiation to prove existence and check ##x(t_0)=x_0##, which is trivial, but should have been mentioned, because for any ##x_0## there is a unique flow is the core statement. However, it's nice to have an argument for convergence.

But what about uniqueness? You cannot conclude from the unique convergence of your solution, that there cannot be another solution. The vector field might have branches. Uniqueness is crucial, especially for physicists!
Doesn’t ##\lim_{N\rightarrow\infty}\mathbf{r}_N(t)=0## imply that any solution to ##\dot{\mathbf{x}}(t)=A\,\mathbf{x}(t)## with ##\mathbf{x}(t_0)=\mathbf{x}_0## is equal to the series? ##\mathbf{r}_N(t)## is just the difference ##\mathbf{x}(t)-\sum_{k=0}^N\frac{\mathbf{x}^{(k)}(t_0)}{k!}(t-t_0)^k.##

suremarc said:
Doesn’t ##\lim_{N\rightarrow\infty}\mathbf{r}_N(t)=0## imply that any solution to ##\dot{\mathbf{x}}(t)=A\,\mathbf{x}(t)## with ##\mathbf{x}(t_0)=\mathbf{x}_0## is equal to the series? ##\mathbf{r}_N(t)## is just the difference ##\mathbf{x}(t)-\sum_{k=1}^N\frac{\mathbf{x}^{(k)}(t_0)}{k!}(t-t_0)^k.##
No. You could have a completely different solution. Differentiation loses constant terms, so maybe this constant term creates a different flow through the vector field. Let ##y(t)## be another solution of the differential equation, i.e. ##\dot y(t)=Ay(t)\, , \,y(t_0)=x_0.## Now why is ##x(t)=y(t)## for ##t>t_0\,?## Something like a repeller might happen at ##x_0##. I guess you would have noticed if you had used ##x_0## as required, and not simply assumed ##x(t_0)=?## in your proof. It is in fact the major part of this theorem, that the initial condition already determines the entire flow!

fresh_42 said:
No. You could have a completely different solution. Differentiation loses constant terms, so maybe this constant term creates a different flow through the vector field. Let ##y(t)## be another solution of the differential equation, i.e. ##\dot y(t)=Ay(t)\, , \,y(t_0)=x_0.## Now why is ##x(t)=y(t)## for ##t>t_0\,?## Something like a repeller might happen at ##x_0##. I guess you would have noticed if you had used ##x_0## as required, and not simply assumed ##x(t_0)=?## in your proof. It is in fact the major part of this theorem, that the initial condition already determines the entire flow!
Sorry, but I don’t quite understand. If ##\mathbf{x}(t_0)=\mathbf{y}(t_0)=\mathbf{x}_0##, then ##\mathbf{x}^{(k)}(t_0)=\mathbf{y}^{(k)}(t_0)=A^k\,\mathbf{x}_0##, which gives rise to the Taylor series, which only depends on ##\mathbf{x}_0, t_0,## and ##t##. I used Taylor’s theorem to prove that any solution to ##\dot{\mathbf{x}}(t)=A\,\mathbf{x}(t)## with ##\mathbf{x}(t_0)=\mathbf{x}_0## converges pointwise to ##e^{(t-t_0)A}\,\mathbf{x}_0##, which includes both ##\mathbf{x}(t)## and ##\mathbf{y}(t)##. Unless I am missing something, this proves uniqueness of the solution I gave, because the difference between any arbitrary solution and my solution converges pointwise to 0.

Edit: corrected a typo; wrote ##\mathbf{x}_0## instead of ##A^k\,\mathbf{x}_0##.

suremarc said:
Sorry, but I don’t quite understand. If ##\mathbf{x}(t_0)=\mathbf{y}(t_0)=\mathbf{x}_0##, then ##\mathbf{x}^{(k)}(t_0)=\mathbf{y}^{(k)}(t_0)=A^k\,\mathbf{x}_0##, which gives rise to the Taylor series, which only depends on ##\mathbf{x}_0, t_0,## and ##t##. I used Taylor’s theorem to prove that any solution to ##\dot{\mathbf{x}}(t)=A\,\mathbf{x}(t)## with ##\mathbf{x}(t_0)=\mathbf{x}_0## converges pointwise to ##e^{(t-t_0)A}\,\mathbf{x}_0##, which includes both ##\mathbf{x}(t)## and ##\mathbf{y}(t)##. Unless I am missing something, this proves uniqueness of the solution I gave, because the difference between any arbitrary solution and my solution converges pointwise to 0.

Edit: corrected a typo; wrote ##\mathbf{x}_0## instead of ##A^k\,\mathbf{x}_0##.
You did not prove that this series is the only possibility. You have proven: the Taylor series provides one solution, not all.

fresh_42 said:
You did not prove that this series is the only possibility. You have proven: the Taylor series provides one solution, not all.
The entire second half of the proof is dedicated to the proposition that any solution ##\mathbf{x}(t)## is equal to its Taylor series at ##t_0##, which only depends on ##\mathbf{x}_0, t_0,## and ##t##.
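For what it's worth, a standard alternative route to uniqueness (a sketch using the same operator norm, not taken from the argument above) is a Grönwall-type energy estimate. If ##\mathbf{x}## and ##\mathbf{y}## are two solutions with ##\mathbf{x}(t_0)=\mathbf{y}(t_0)=\mathbf{x}_0##, set ##\mathbf{z}=\mathbf{x}-\mathbf{y}## and ##g(t)=\|\mathbf{z}(t)\|^2##. Then
$$g'(t)=2\langle \mathbf{z}(t), A\mathbf{z}(t)\rangle \leq 2\|A\|\,g(t),$$
so ##\big(e^{-2\|A\|t}g(t)\big)' \leq 0##. Since ##g(t_0)=0## and ##g\geq 0##, it follows that ##g\equiv 0## for ##t\geq t_0##; replacing ##A## by ##-A## and reversing time handles ##t<t_0##. Hence ##\mathbf{z}\equiv 0##.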

suremarc said:
Sorry, but I don’t quite understand. If ##\mathbf{x}(t_0)=\mathbf{y}(t_0)=\mathbf{x}_0##, then ##\mathbf{x}^{(k)}(t_0)=\mathbf{y}^{(k)}(t_0)=A^k\,\mathbf{x}_0##, which gives rise to the Taylor series, which only depends on ##\mathbf{x}_0, t_0,## and ##t##. I used Taylor’s theorem to prove that any solution to ##\dot{\mathbf{x}}(t)=A\,\mathbf{x}(t)## with ##\mathbf{x}(t_0)=\mathbf{x}_0## converges pointwise to ##e^{(t-t_0)A}\,\mathbf{x}_0##, which includes both ##\mathbf{x}(t)## and ##\mathbf{y}(t)##. Unless I am missing something, this proves uniqueness of the solution I gave, because the difference between any arbitrary solution and my solution converges pointwise to 0.

Edit: corrected a typo; wrote ##\mathbf{x}_0## instead of ##A^k\,\mathbf{x}_0##.
My confusion comes from this part:
suremarc said:
We now prove that ##\mathbf{x}(t)=e^{(t-t_0)A}\,\mathbf{x}(t_0)##: to be more precise, $$(\forall t,t_0\in\mathbb{R})\lim_{N\rightarrow\infty}\Bigg(\mathbf{x}(t)-\sum_{k=0}^N\frac{\mathbf{x}^{(k)}(t_0)}{k!}(t-t_0)^k\Bigg)=0.$$
You prove that ##\mathbf{x}(t)## has a Taylor expansion by quoting Taylor's theorem? I didn't realize that you switched the roles of ##\mathbf{x}(t)##. In the first part you basically defined ##\mathbf{x}(t)## by the Taylor series of ##e^{(t-t_0)A}\,x_0##, and in the second part it is not this specific solution anymore but any solution. That ##\mathbf{x}(t)## is a solution is buried in the remainder term. You showed that the remainder term of any solution converges to the remainder term of the constructed solution, and all are named ##\mathbf{x}(t)##.

Besides my confusion, which I think has cleared up now, there is another question I have: the Taylor expansion is a local theorem. How can we know that the solution is a global one, since you assumed the existence of a Taylor expansion?

fresh_42 said:
Beside my confusion - which I think has cleared up now - there is another question I have: The Taylor expansion is a local theorem. How can we know that the solution is a global one, since you assumed the existence of a Taylor expansion?
Well, Taylor’s theorem says that for any ##t, t_0## there exists a ##\xi## between ##t## and ##t_0## such that ##\mathbf{r}_k(t)=\frac{\mathbf{x}^{(k+1)}(\xi)}{(k+1)!}(t-t_0)^{k+1}##. Actually, this only applies to the one-dimensional case, so I erred there, but you can just find the ##\xi## for each coordinate of ##\mathbf{x}##. Anyway, this means that convergence of ##\mathbf{r}_N## is only pointwise, not uniform, since it depends on ##t##. So obviously for large values of ##t-t_0## the convergence is slower, but it still converges.

Edit: corrected a typo where I put ##t_0## instead of ##\xi##

suremarc said:
Well, Taylor’s theorem says that for any ##t, t_0## there exists a ##\xi## between ##t## and ##t_0## such that ##\mathbf{r}_k(t)=\frac{\mathbf{x}^{(k+1)}(\xi)}{(k+1)!}(t-t_0)^{k+1}##. Actually, this only applies to the one-dimensional case, so I erred there, but you can just find the ##\xi## for each coordinate of ##\mathbf{x}##. Anyway, this means that convergence of ##\mathbf{r}_N## is only pointwise, not uniform, since it depends on ##t##. So obviously for large values of ##t-t_0## the convergence is slower, but it still converges.

Edit: corrected a typo where I put ##t_0## instead of ##\xi##
My solution used the inverse group element. Anyway, this tiny statement is very important:

The matrix exponential function ##tA\longmapsto e^{At}## is the unique solution of the matrix differential equation
$$\dot X(t)=AX(t)\, , \,X(0)=1_{\mathbb{M}(n,\mathbb{R})}\; , \;X\, : \,\mathbb{R}\longrightarrow \mathbb{R}^{n\times n}$$

Who says one can't learn something here! Edit: The other matrix problem you solved (linear programming) is called Farkas' lemma.
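The boxed statement is easy to probe numerically. A minimal sketch (my own series-based exponential, assuming NumPy is available; this illustrates the statement, it does not prove uniqueness) checks ##X(0)=1## and ##\dot X(t)=AX(t)## via a central difference:

```python
import numpy as np

def expm_series(M, terms=60):
    """Matrix exponential by its defining power series (adequate for small ||M||)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms + 1):
        term = term @ M / k
        out = out + term
    return out

rng = np.random.default_rng(1)
A = 0.5 * rng.standard_normal((3, 3))
X = lambda t: expm_series(t * A)

# X(0) = identity, and dX/dt = A X(t), checked with a central difference
t, h = 0.7, 1e-6
dX = (X(t + h) - X(t - h)) / (2 * h)
print(np.allclose(X(0.0), np.eye(3)), np.allclose(dX, A @ X(t), atol=1e-6))
```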

Since all the uni-level questions have been solved, let me set a problem:

Let ##r/s## be any positive or negative rational number; prove that ##e^{r/s}## is an irrational number. Hint: make use of the polynomial:

\begin{align*}
P_n (x) = \frac{x^n (1-x)^n}{n!} .
\end{align*}

I am attempting this problem, but I'd like some sort of hint, or indication I'm going down the right track.

Every ##5^n## with ##n \geq 2## ends in 25, hence every ##2^m## that satisfies the equation must end in 32. This occurs when ##m = 5 + 20k##, where ##k## is a non-negative integer.

I checked this for a bunch of values and only found 32 and 25. I reckon these are the only two, but I don't know how to prove it.

I need to prove that ##2^{5+20k} - 7## has no prime factor other than 5. Does it suffice to try a bunch of divisibility tests and see where that takes me?

lekh2003 said:
I am attempting this problem, but I'd like some sort of hint, or indication I'm going down the right track.

Every ##5^n## with ##n \geq 2## ends in 25, hence every ##2^m## that satisfies the equation must end in 32. This occurs when ##m = 5 + 20k##, where ##k## is a non-negative integer.

I checked this for a bunch of values and only found 32 and 25. I reckon these are the only two, but I don't know how to prove it.

I need to prove that ##2^{5+20k} - 7## has no prime factor other than 5. Does it suffice to try a bunch of divisibility tests and see where that takes me?
The basic idea of examining remainders is correct. But it's not so easy.

You should check the solutions up to ##m=5## manually. Then consider the equation modulo 64 and conclude what ##n## must look like. Then take another modulus (17), and so on.
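Assuming the equation under discussion is ##2^m - 7 = 5^n## (as the 32/25 examples suggest), a brute-force search over a modest range is a useful sanity check before attempting the modular argument:

```python
# Search for m with 2**m - 7 equal to a pure power of 5 (exact integer arithmetic).
sols = []
for m in range(1, 200):
    v = 2 ** m - 7
    if v < 1:
        continue
    n = 0
    while v % 5 == 0:        # strip all factors of 5
        v //= 5
        n += 1
    if v == 1:               # nothing but factors of 5 remained
        sols.append((m, n))
print(sols)
```

Within this range the only hits are ##(m,n)=(3,0)## (since ##2^3-7=1=5^0##) and ##(5,2)## (since ##32-7=25##); whether ##n=0## counts depends on the exact problem statement.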

julian said:
Let ##r/s## be any positive or negative rational number, prove that ##e^{r/s}## is an irrational number. Hint: make use of the polynomial:

\begin{align*}
P_n (x) = \frac{x^n (1-x)^n}{n!} .
\end{align*}

Further hint: There is an immediate simplification that can be made to the problem. You then move on to making use of the polynomial ##P_n (x) = x^n(1-x)^n / n!## and its derivatives. In the proof you will need to introduce another function, what function springs to mind?
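One property that makes ##P_n## useful in such irrationality proofs (a standard Niven-style fact, offered as an aside rather than a solution) is that every derivative of ##P_n## at ##0## (and, by the symmetry ##P_n(x)=P_n(1-x)##, at ##1##) is an integer. A quick exact-arithmetic check with Python's fractions module (variable names are my own):

```python
from fractions import Fraction
from math import comb, factorial

n = 5
# exact coefficients of P_n(x) = x^n (1 - x)^n / n!
coeffs = [Fraction(0)] * (2 * n + 1)
for j in range(n + 1):
    coeffs[n + j] = Fraction((-1) ** j * comb(n, j), factorial(n))

# the k-th derivative at 0 is k! * coeffs[k]; all of them should be integers
derivs_at_0 = [factorial(k) * coeffs[k] for k in range(2 * n + 1)]
print(all(d.denominator == 1 for d in derivs_at_0))
```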


11.1. There is a number which occurs at least five times among the differences between two numbers of ##A##.

Right. Yes, there must be a number which occurs at least 5 times among the differences.

Proof: If we consider the pair ##(x, y)## for the difference between two numbers, we need not consider the reverse pair ##(y, x)##, since the two pairs have the same absolute value of the difference. In other words, it suffices to consider pairs ##(x, y)## such that ##y - x > 0##. Since there are 32 distinct numbers in ##A##, there are ##_{32}C_{2} = 496## such pairs. Let ##D = (d_1, d_2, \ldots, d_{496})## denote the list of differences of those pairs.

All numbers in ##A## are integers in the range ##[1, 111]##, so the differences must be integers in the range ##[1, 110]##, with the maximum difference of 110 achieved for the pair ##(1, 111)##. In other words, there are at most 110 distinct values possible for the elements of ##D##. If each of the 110 values occurs at most 4 times in ##D##, at most 440 elements of ##D## get assigned valid values, leaving ##56 > 0## elements with no value assigned (with a smaller limit on the number of repeated values, even more elements of ##D## would be left without assigned values). It must therefore be the case that at least one value repeats more than 4 times (i.e. repeats at least 5 times) in ##D##.

Q11.2. There is a number that occurs at least six times among the differences between two numbers of ##A##.

Yes, there must be a number which occurs at least 6 times among the differences. We can prove this by showing that if no number can occur more than 5 times among differences, then a contradiction would arise.

Let ##x_1 < x_2 < \ldots < x_{31} < x_{32}## be the ordered list of the 32 numbers belonging to set ##A##. The ##x_{i}##'s can be viewed as 32 distinct points on the X-axis. There are 31 pairs of adjacent points, namely ##(x_{i}, x_{i+1})##, ##1 \leq i \leq 31##. The differences corresponding to these pairs are defined by ##d_{i} = x_{i+1} - x_{i}##. By definition of ##A##, all ##x_{i}##'s are integers in the range ##[1, 111]## and therefore the ##d_{i}##'s must be integers in the range ##[1, 110]##.

Consider the sum of the differences of these 31 pairs, ##S = \sum_{i=1}^{31} d_{i}##. ##S## gets its minimum value ##S_{\text{min}}## when the values of ##d_{i}##'s are minimized subject to constraints imposed by the problem statement. If differences can take only integer values in ##[1, 110]## and no number can occur more than 5 times among differences, then ##S_{\text{min}} = 5 \times (1 + 2 + 3 + 4 + 5 + 6) + 7 = 112##. Here we use the fact that the 31 differences need to have at least 7 distinct values given that each value can occur at most 5 times and that the 6 smallest allowed values, namely 1 to 6, are the ones that should repeat the most for the sum to be minimized.

Now ##S = \sum_{i=1}^{31} d_{i} \geq 112 \Rightarrow \sum_{i=1}^{31} (x_{i+1} - x_{i}) \geq 112 \Rightarrow (x_{32}-x_{1}) \geq 112##. But this is a contradiction, since ##1 \leq x_{1} < x_{32} \leq 111## forces ##1 \leq (x_{32}-x_{1}) \leq 110##. The contradiction arises because, by limiting the number of times the same value can occur among the differences, we force the sum of differences (and thereby the differences corresponding to non-adjacent points) to take larger values than would be possible if smaller values were allowed to repeat more often. And by limiting the repetition count to 5, we violate the constraint on the range of the ##x_{i}##'s. Hence at least one value must occur more than 5 times, i.e. at least 6 times, among the differences.
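The counting in both parts is small enough to verify directly (a sketch with my own variable names):

```python
from math import comb

pairs = comb(32, 2)      # number of unordered pairs from A: 496
# part 11.1: differences lie in [1, 110]; if each value occurred at most 4 times,
# only 4 * 110 = 440 slots would exist, short by 496 - 440 = 56
slack = pairs - 4 * 110
# part 11.2: minimal sum of the 31 adjacent gaps if no value repeats more than
# 5 times: five copies each of 1..6 (30 gaps) plus one further gap of at least 7
s_min = 5 * sum(range(1, 7)) + 7
print(pairs, slack, s_min)  # 496 56 112
```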

Observation 1: ##2\sqrt{n} > \sqrt{n-1} + \sqrt{n+1} \ \, \forall{n \geq 1} ##
Proof: ##(\sqrt{n-1} + \sqrt{n+1})^2 = (n-1) + (n+1) + 2\sqrt{n^2 - 1} = 2n + 2\sqrt{n^2 - 1} < 2n + 2n##
Applying the square root on both sides, we see that ##\sqrt{n-1} + \sqrt{n+1} < \sqrt{4n} = 2 \sqrt{n}##.

It is easy to see from the definition of ##\lfloor{a}\rfloor## that if ##x < y##, then ##\lfloor{x}\rfloor \leq \lfloor{y}\rfloor## (since the largest integer not exceeding ##x## also does not exceed ##y##). Combining this property with observation 1 leads to:
##\lfloor{\sqrt{n-1} + \sqrt{n+1}}\rfloor \leq \lfloor 2\sqrt{n} \rfloor \ \forall{n \geq 1} ##

Observation 2: ##\sqrt{n-1} + \sqrt{n+1} > 2 \sqrt{n} - 1 \ \, \forall{n \geq 1} ##
Proof: ##(\sqrt{n-1} + \sqrt{n+1})^2 = 2n + 2\sqrt{(n - 1)(n+1)} > 2n + 2(n-1)## since ##(n-1)(n+1) > (n-1)^2 \ \forall n \geq 1##. This implies ##(\sqrt{n-1} + \sqrt{n+1})^2 > 4n - 2 > (2\sqrt{n} - 1)^2##, since ##(2\sqrt{n} - 1)^2 = 4n - (4\sqrt{n} - 1)## and ##(4\sqrt{n} - 1) > 2## when ##n \geq 1##. Applying square root on both sides, we get ##\sqrt{n-1} + \sqrt{n+1} > 2\sqrt{n} - 1 \ \, \forall{n \geq 1}##

Combining observations (1) and (2), we get ##(2 \sqrt{n} - 1) < (\sqrt{n-1} + \sqrt{n+1}) < 2\sqrt{n}## (Eq. 1)

Suppose ##\lfloor \sqrt{n} \rfloor = m## for some ##m \in \mathbb{N}##. Then ##m^2## must be the largest perfect square less than or equal to ##n## and ##n = m^2 + a## for some integer ##a## such that ##0 \leq a \leq 2m##. We get the upper bound of ##a## by noting that if ##a## is higher even by 1, we get ##n = m^2 + 2m +1 = (m+1)^2##, making ##(m+1)^2## the largest perfect square less than or equal to ##n##, contradicting the earlier assumption.

Equivalently, we must have ##\sqrt{n} = m + \epsilon## where ##0 \leq \epsilon < 1##, with ##\epsilon = 0## corresponding to ##a = 0## and ##\epsilon## increasing as ##a## increases.

We now consider 3 different cases that cover all possible values of ##n## such that ##\lfloor \sqrt{n} \rfloor = m## and calculate ##\lfloor 2\sqrt{n} \rfloor - \lfloor{\sqrt{n-1} + \sqrt{n+1}}\rfloor## in each.

Define ##Y = \sqrt{n-1} + \sqrt{n+1}##.

Case 1: ##\epsilon = 0 \equiv a = 0##
Here, ##\sqrt{n} = m \Rightarrow 2\sqrt{n} = 2m \Rightarrow \lfloor 2\sqrt{n} \rfloor = 2m##.
Using (Eq. 1) we get ##2m - 1 < Y < 2m \Rightarrow \lfloor Y \rfloor = 2m - 1 \Rightarrow \lfloor 2\sqrt{n} \rfloor - \lfloor Y \rfloor = 2m - (2m - 1) = 1##.

Case 2: ##0 < \epsilon < 0.5##
This corresponds to ##1 \leq a \leq m##, since if ##a \geq (m+1)##, we get ##n \geq (m^2 + m + 1) > (m + 0.5)^2 \Rightarrow \sqrt{n} > m + 0.5##.

##2m < 2\sqrt{n} = (2m + 2\epsilon) < (2m+1) \Rightarrow \lfloor 2\sqrt{n} \rfloor = 2m##
Since ##a \geq 1## in this case, we get ##Y \geq \sqrt{(m^2+1-1)} + \sqrt{m^2+1+1} > 2m##. Combining with (Eq. 1), or alternatively by using the fact that ##\epsilon < 0.5##, we get ##2m < Y < 2\sqrt{n} < 2m+1 \Rightarrow \lfloor Y \rfloor = 2m##.

Hence, ##\lfloor 2\sqrt{n} \rfloor - \lfloor Y \rfloor = 2m - 2m = 0## in this case.

Case 3: ##0.5 < \epsilon < 1##
This corresponds to ##(m+1) \leq a \leq 2m##, since if ##a \geq (2m+1)##, we get ##n \geq (m^2 + 2m + 1) \Rightarrow \sqrt{n} \geq (m + 1)## which contradicts ##\lfloor \sqrt{n} \rfloor = m##. Also note that we consider ##\epsilon > 0.5## rather than ##\epsilon \geq 0.5## since ##\epsilon = 0.5## does not correspond to an integer value of ##n##.

In this case, ##2(m+0.5) < 2\sqrt{n} = (2m + 2\epsilon) < 2(m+1) \Rightarrow \lfloor 2\sqrt{n} \rfloor = 2m + 1##.
Since ##a \geq (m+1)## in this case, we get ##Y \geq \sqrt{(m^2+m+1-1)} + \sqrt{m^2+m+1+1}##.

##Y^2 \geq \left(m^2 + m + m^2 + m + 2 + 2\sqrt{(m^2 + m)(m^2 + m + 2)}\right) > (2m^2 + 2m + 2 + 2(m^2 + m))##, where we use the fact that ##(m^2 + m)(m^2 + m + 2) > (m^2 + m)^2## for ##m > 0##. This yields ##Y^2 > (4m^2 + 4m + 2) > (2m + 1)^2 \Rightarrow Y > (2m+1)##.

Combining this inequality with (Eq. 1), we get ##(2m+1) < Y < 2\sqrt{n} < 2m+2 \Rightarrow \lfloor Y \rfloor = 2m+1##.

Hence, ##\lfloor 2\sqrt{n} \rfloor - \lfloor Y \rfloor = (2m +1) - (2m +1)= 0## in this case too.

Answer: Combining the observations from the 3 cases, we see that ##f(n) = 1## if ##n## is a perfect square and ##f(n) = 0## otherwise (i.e. if ##n## is natural number that is not a perfect square).
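The case analysis can be cross-checked numerically. In the sketch below (function name is my own), ##\lfloor 2\sqrt{n}\rfloor## is computed exactly as ##\mathrm{isqrt}(4n)##, while the floor of ##Y=\sqrt{n-1}+\sqrt{n+1}## uses floating point, which is safe at this scale since ##Y## stays well away from integers:

```python
import math

def f(n):
    # floor(2*sqrt(n)) exactly via isqrt; floor(Y) via float is adequate here
    y = math.sqrt(n - 1) + math.sqrt(n + 1)
    return math.isqrt(4 * n) - math.floor(y)

# f(n) should be 1 exactly when n is a perfect square
ok = all(f(n) == (1 if math.isqrt(n) ** 2 == n else 0) for n in range(2, 10001))
print(ok)
```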
