Math Challenge - February 2020

In summary, this conversation covers a variety of solved problems and open questions in mathematics. The topics discussed include limits, polynomial interpolation, arc-length parameterization, Green's formula, the Wirtinger inequality, uniform convergence, matrix multiplication, permutations, integrals, continuous bilinear forms, Hilbert spaces, chess puzzles, and geometric angle calculations.
  • #36
Answer the following questions:
a.) How many knights can you place on an ##n\times m## chessboard such that no two attack each other?
b.) In how many different ways can eight queens be placed on a chessboard such that no queen threatens another? Two solutions are not different if one can be obtained from the other by a rotation or a mirroring of the board.

I have solved part (a). I will post the answer in text form when I solve both parts. Until then I have numbered all the equations.
-----
[Attached image: IMG_20200205_200702.jpg]

I have taken ##m## as vertical and ##n## as horizontal; it doesn't matter, as both swap on rotating the board.
The language of the question is quite complex. If the question wants the maximum number of knights, then it is always given by case 2 (case 1 matches its values when ##m## is of the form ##3x+1##), except when ##m## is 1.
 
  • #37
mmaismma said:
... when I solve both parts.
Hint: Don't try to prove the second part. An answer will do.

As to the first part: Is there a formula for all? It seems as if you could simplify your formulas, especially: what is your solution: case I or case II? And are there exceptions for small boards?
 
  • #38
Antarres said:
Well, you say that ##1,x,\ldots, x^n## is the basis, but in order to prove it, you have to prove linear independence of these functions, which basically boils down to a Vandermonde-type determinant being non-singular (for concrete ##a##, I guess). So if you take this as a fact, then it is the simplest, but if you don't, you have to revert back, that's what I meant. Either way, yeah, injectivity implies surjectivity because of that argument.
It doesn't boil down to a Vandermonde argument. If ##p=a_nx^n+\ldots+a_0## is a linear combination of those functions that vanishes, then it is a polynomial that vanishes everywhere and hence is the zero polynomial, i.e. the trivial linear combination (again using the fact that you noted: the only polynomial of degree at most ##n## that vanishes at more than ##n## distinct points is the zero polynomial).
 
  • Like
Likes Antarres
  • #39
Here's my attempt at question #8. I wasn't able to solve it in terms of elementary functions.$$
\sum_{k=1}^{\infty}\frac{1}{\binom{2k}{k}}=\sum_{k=1}^{\infty}\frac{(k!)^2}{(2k)!}=\sum_{k=1}^{\infty}\frac{\Gamma(1+k)\Gamma(1+k)}{\Gamma(1+2k)}=\sum_{k=1}^{\infty}\frac{(1)_k(1)_k}{(1)_{2k}}=\sum_{k=1}^{\infty}\frac{(1)_k}{(\frac{1}{2})_k 2^{2k}}=\sum_{k=1}^{\infty}\frac{(1)_k(1)_k}{(\frac{1}{2})_k\, k!\, 2^{2k}}
$$where the Pochhammer symbol is$$(x)_k=\frac{\Gamma(x+k)}{\Gamma(x)} $$and I have used the duplication formula$$(x)_{2k}=2^{2k}\left(\frac{x}{2}\right)_k\left(\frac{1+x}{2}\right)_k $$I invoke the hypergeometric sum$$ {}_2F_1(a,b;c;z)=\sum_{k=0}^{\infty}\frac{(a)_k(b)_k}{(c)_k}\frac{z^k}{k!}$$to find$$
\sum_{k=1}^{\infty}\frac{1}{\binom{2k}{k}}={}_{2}F_1\left(1,1;\frac{1}{2};\frac{1}{4}\right)-1
$$I don't think I'm done yet. Because ##c## is less than ##b## in the above I can't use Euler's integral to evaluate the hypergeometric function. Is it possible to solve this problem in terms of elementary functions?
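Not part of the original post: the identity can be sanity-checked numerically; the sum equals ##{}_2F_1(1,1;\frac{1}{2};\frac{1}{4})-1## (note there is no ##\sqrt{\pi}## prefactor). A quick sketch, where `hyp2f1_series` is an ad-hoc partial-sum helper, not a library routine:

```python
import math

def hyp2f1_series(a, b, c, z, terms=200):
    """Partial sum of the 2F1 hypergeometric series (fine for |z| < 1)."""
    s, t = 0.0, 1.0
    for k in range(terms):
        s += t
        # Ratio of consecutive terms: (a+k)(b+k) z / ((c+k)(k+1)).
        t *= (a + k) * (b + k) / ((c + k) * (k + 1)) * z
    return s

# Sum of reciprocal central binomial coefficients, truncated (terms decay like 4^-k).
lhs = sum(1 / math.comb(2 * k, k) for k in range(1, 60))
rhs = hyp2f1_series(1, 1, 0.5, 0.25) - 1  # subtract the k = 0 term

assert abs(lhs - rhs) < 1e-12
```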
 
  • #40
Fred Wright said:
Here's my attempt at question #8. I wasn't able to solve it in terms of elementary functions.
It can be done with elementary functions, and the exact value is what is asked for. The trick is to use the Taylor series of a suitable function.
 
  • #41
Infrared said:
It doesn't boil down to a Vandermonde argument. If ##p=a_nx^n+\ldots+a_0## is a linear combination of those functions that vanishes, then it is a polynomial that vanishes everywhere and hence is the zero polynomial, i.e. the trivial linear combination (again using the fact that you noted: the only polynomial of degree at most ##n## that vanishes at more than ##n## distinct points is the zero polynomial).
Yeah, you're completely right. I got so used to arguments of the type I mentioned (like the one with a vector space) that I completely forgot about the trivial proof you mention here. My bad, heh. Thanks for reminding me.
 
  • #42
I had a go at problem 8 using the generating function technique where you define

##
f (x) = \sum_{n=1}^\infty \frac{n!}{(2n)!} x^n = \sum_{n=1}^\infty a_n x^n
##

The sum we are after will then be equal to ##f (1)##. But I got the answer in terms of the error function...

Note that

##
a_{n+1} = a_n \times \frac{1}{2 (2n +1)}
##

or ##2 (2n+1) a_{n+1} = a_n##. In the following we use ##a_1 = 1/2##. First

\begin{align}
x f (x) & = \sum_{n=1}^\infty a_n x^{n+1}
\nonumber \\
& = \sum_{n=1}^\infty 2 (2n+1) a_{n+1} x^{n+1}
\nonumber
\end{align}

Then

\begin{align}
x f (x) & = \sum_{n=1}^\infty 2 (2n+1) a_{n+1} x^{n+1}
\nonumber \\
& = 2 \sum_{m=2}^\infty (2m -1) a_m x^m
\nonumber \\
& = 2 \sum_{m=1}^\infty (2m -1) a_m x^m - 2 (2 \times 1 - 1) a_{1} x
\nonumber \\
& = 4 \sum_{m=1}^\infty m a_m x^m - 2 f (x) - x
\nonumber \\
& = 4 x \frac{d}{dx} f (x) - 2 f (x) - x
\nonumber
\end{align}

This gives the differential equation:

##
4x \frac{d}{dx} f (x) - (x+2) f (x) - x = 0
##

or

##
\frac{d}{dx} f (x) - \frac{x+2}{4x} f (x) = \frac{1}{4} .
##

(with boundary condition ##f (0) = 0##). Now this is the familiar form

##
\frac{d}{dx} f (x) + p (x) f (x) = q (x)
##

that can be solved using the integrating factor method which uses

\begin{align}
\frac{d}{dx} \Big[ e^{\nu (x)} f (x) \Big] & = e^{\nu (x)} \Big[ \frac{d}{dx} f (x) + p (x) f (x) \Big]
\nonumber \\
& = e^{\nu (x)} q (x)
\nonumber
\end{align}

where ##\nu (x) = \int p (x) dx##. We will solve this with

##
f (x) = e^{- \nu (x)} \int e^{\nu (x)} q(x) dx .
##

In our case

##
p (x) = - \frac{x+2}{4x} , \qquad q (x) = \frac{1}{4}
##

so that

\begin{align}
\nu (x) & = - \frac{x}{4} - \frac{1}{2} \ln x
\nonumber
\end{align}

meaning

##
e^{+ \nu (x)} = \frac{e^{-x/4}}{x^{1/2}}
##

and

\begin{align}
f (x) & = \frac{e^{x/4} x^{1/2}}{4} \int_0^x \frac{e^{-x/4}}{x^{1/2}} dx
\nonumber
\end{align}

(choosing the lower limit to be ##0## will ensure that ##f (0) = 0##). We now take ##x = 1##, then

\begin{align}
\sum_{n=1}^\infty \frac{n!}{(2n)!} = f (1) & = \frac{e^{1/4}}{4} \int_0^1 \frac{e^{-x/4}}{x^{1/2}} dx
\nonumber
\end{align}

At this point I introduce the error function:

\begin{align}
\text{erf} (u) & = \frac{2}{\sqrt{\pi}} \int_0^u e^{-t^2} dt
\nonumber
\end{align}

Putting ##x/4 = t^2## we obtain:

\begin{align}
\text{erf} (u) & = \frac{1}{2\sqrt{\pi}} \int_0^{4u^2} \frac{e^{-x/4}}{x^{1/2}} dx
\nonumber
\end{align}

and then setting ##u = 1/2## we obtain:

\begin{align}
2 \sqrt{\pi}\text{erf} \left( \frac{1}{2} \right) & = \int_0^{1} \frac{e^{-x/4}}{x^{1/2}} dx
\nonumber
\end{align}

So finally:

\begin{align}
\sum_{n=1}^\infty \frac{(n!)}{(2n)!} & = \frac{1}{2} e^{1/4} \sqrt{\pi}\text{erf} \left( \frac{1}{2} \right) .
\nonumber
\end{align}

EDIT: Oops, the question is ##\sum_{n=1}^\infty \frac{(n!)^2}{(2n)!}##?
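Not part of the post, but even though this solves ##\sum n!/(2n)!## rather than the intended sum, the erf closed form checks out numerically; `math.erf` is in the Python standard library:

```python
import math

# Partial sum of sum_{n>=1} n!/(2n)!; the terms decay faster than 4^-n.
partial = sum(math.factorial(n) / math.factorial(2 * n) for n in range(1, 40))

# The closed form derived above: (1/2) e^{1/4} sqrt(pi) erf(1/2).
closed_form = 0.5 * math.exp(0.25) * math.sqrt(math.pi) * math.erf(0.5)

assert abs(partial - closed_form) < 1e-12
print(closed_form)  # ≈ 0.5923
```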
 
  • #43
@Antarres A small note: you might have noticed that we used the same fact twice (a nonzero polynomial of degree at most ##n## vanishes no more than ##n## times). This is in fact a little redundant. One can define ##\mathbb{P}_n## just as the vector space of formal expressions of the form ##a_0+\ldots+a_nx^n##, without necessarily interpreting them as functions. Then just by definition ##1,\ldots,x^n## form a basis. We still have the linear map ##p\mapsto (p(a_0),\ldots,p(a_n))##, and the argument for it being injective is the same. We didn't need to assume a priori that ##1,\ldots,x^n## are linearly independent as functions.

This point might seem pretty minor, but over finite fields, polynomials and polynomial functions are in fact different! For example, ##x## and ##x^p## define the same function over ##\mathbb{Z}/p## but we still want them to be distinct polynomials.
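The last claim is quick to check computationally: by Fermat's little theorem, ##a^p \equiv a \pmod p##, so ##x## and ##x^p## agree as functions on ##\mathbb{Z}/p## even though they are distinct polynomials. A minimal check (my addition, not part of the post):

```python
# x and x^p define the same *function* on Z/p (Fermat's little theorem),
# even though they are distinct polynomials over that field.
for p in (2, 3, 5, 7, 11, 13):
    assert all(pow(a, p, p) == a % p for a in range(p))
print("x and x^p agree pointwise on Z/p for several primes p")
```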
 
  • Like
Likes Antarres
  • #44
Thank you @Pi-is-3! Let me see if I can adjust my argument.
 
  • Like
Likes Pi-is-3
  • #45
fresh_42 said:
Hint: Don't try to prove the second part. An answer will do.

As to the first part: Is there a formula for all? It seems as if you could simplify your formulas, especially: what is your solution: case I or case II? And are there exceptions for small boards?

If my answer is correct, part (b) is very simple.
-----
Required number = Total selections - Selections where one queen is attacked.
##=^{64}\text{C}_8-^8\text{C}_2##
 
  • #46
mmaismma said:
If my answer is correct, part (b) is very simple.
-----
Required number = Total selections - Selections where one queen is attacked.
##=^{64}\text{C}_8-^8\text{C}_2##
What is this number? It looks rather big. The number we are looking for is below 100, and below 20 for non symmetric solutions. The correct answer will do.
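For anyone who wants to check their count, a short backtracking search settles it (a brute-force sketch, my addition, not the pen-and-paper route the challenge intends): count all placements, then collapse them under the eight rotations and mirrorings of the board.

```python
def queens_solutions(n=8):
    """All placements of n non-attacking queens, one per row; each solution
    is a tuple whose entry r is the column of the queen in row r."""
    sols = []

    def place(row, cols, diag1, diag2, pos):
        if row == n:
            sols.append(tuple(pos))
            return
        for c in range(n):
            if c in cols or row + c in diag1 or row - c in diag2:
                continue
            place(row + 1, cols | {c}, diag1 | {row + c}, diag2 | {row - c}, pos + [c])

    place(0, set(), set(), set(), [])
    return sols


def canonical(sol, n=8):
    """Canonical representative of a solution under the 8 board symmetries."""
    cells = frozenset(enumerate(sol))  # set of (row, col) pairs
    variants = []
    cur = cells
    for _ in range(4):
        cur = frozenset((c, n - 1 - r) for r, c in cur)              # rotate 90 degrees
        variants.append(cur)
        variants.append(frozenset((r, n - 1 - c) for r, c in cur))   # mirror image
    return min(tuple(sorted(v)) for v in variants)


all_solutions = queens_solutions()
essentially_different = {canonical(s) for s in all_solutions}
print(len(all_solutions), "total;", len(essentially_different), "up to rotation/mirroring")
```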
 
  • #47
julian said:
EDIT: Oops, the question is ##\sum_{n=1}^\infty \frac{(n!)^2}{(2n)!}##?
Look for the Taylor series of a trigonometric function to deal with the coefficient.
 
  • #48
I have done problem 8 using the generating function technique where you define

##
f (x) = \sum_{n=1}^\infty \frac{(n!)^2}{(2n)!} x^n = \sum_{n=1}^\infty a_n x^n
##

The sum we are after will then be equal to ##f (1)##.

Note that

##
a_{n+1} = a_n \times \frac{n+1}{2 (2n +1)}
##

or ##2 (2n+1) a_{n+1} = (n+1) a_n##. In the following we use ##a_1 = 1/2##. First

\begin{align}
\left( x \frac{d}{dx} + 1 \right) f (x) & = \sum_{n=1}^\infty (n+1) a_n x^n
\nonumber \\
& = \sum_{n=1}^\infty 2 (2n+1) a_{n+1} x^n
\nonumber
\end{align}

Then

\begin{align}
x \left( x \frac{d}{dx} + 1 \right) f (x) & = \sum_{n=1}^\infty 2 (2n+1) a_{n+1} x^{n+1}
\nonumber \\
& = 2 \sum_{m=2}^\infty (2m -1) a_m x^m
\nonumber \\
& = 2 \sum_{m=1}^\infty (2m -1) a_m x^m - 2 (2 \times 1 - 1) a_{1} x
\nonumber \\
& = 4 \sum_{m=1}^\infty m a_m x^m - 2 f (x) - x
\nonumber \\
& = 4 x \frac{d}{dx} f (x) - 2 f (x) - x
\nonumber
\end{align}

This gives the differential equation:

##
(x^2 - 4x) \frac{d}{dx} f (x) + (x+2) f (x) + x = 0
##

or

##
\frac{d}{dx} f (x) + \frac{x+2}{x^2 - 4x} f (x) = \frac{1}{4 - x} .
##

(with boundary condition ##f (0) = 0##). Now this is the familiar form

##
\frac{d}{dx} f (x) + p (x) f (x) = q (x)
##

that can be solved using the integrating factor method which uses

\begin{align}
\frac{d}{dx} \Big[ e^{\nu (x)} f (x) \Big] & = e^{\nu (x)} \Big[ \frac{d}{dx} f (x) + p (x) f (x) \Big]
\nonumber \\
& = e^{\nu (x)} q (x)
\nonumber
\end{align}

where ##\nu (x) = \int p (x) dx##. We will solve this with

##
f (x) = e^{- \nu (x)} \int e^{\nu (x)} q(x) dx .
##

In our case

##
p (x) = - \frac{1}{2x} + \frac{3}{2} \frac{1}{x-4} , \qquad q (x) = \frac{1}{4-x}
##

so that, working on ##0 < x < 4## where ##4 - x > 0##,

\begin{align}
\nu (x) & = - \frac{1}{2} \ln x + \frac{3}{2} \ln (4-x)
\nonumber
\end{align}

meaning

\begin{align}
e^{+ \nu (x)} & = \left( \frac{(4-x)^3}{x} \right)^{1/2}
\nonumber
\end{align}

and

\begin{align}
f (x) & = \left( \frac{x}{(4-x)^3} \right)^{1/2} \int_0^x \left( \frac{(4-x)^3}{x} \right)^{1/2} \frac{1}{4-x} dx
\nonumber \\
& = \left( \frac{x}{(4-x)^3} \right)^{1/2} \int_0^x \frac{(4-x)^{1/2}}{x^{1/2}} dx
\nonumber
\end{align}

(choosing the lower limit to be ##0## will ensure that ##f (0) = 0##). We now take ##x = 1##, then

\begin{align}
\sum_{n=1}^\infty \frac{(n!)^2}{(2n)!} = f (1) & = \left( \frac{1}{(3)^3} \right)^{1/2} \int_0^1 \frac{(4-x)^{1/2}}{x^{1/2}} dx
\nonumber \\
& = \left( \frac{1}{27} \right)^{1/2} \int_0^1 \frac{(4-x)^{1/2}}{x^{1/2}} dx
\nonumber
\end{align}

Put ##u = \sqrt{x}##. Then ##du = \frac{1}{2} \frac{dx}{\sqrt{x}}## and

\begin{align}
\sum_{n=1}^\infty \frac{(n!)^2}{(2n)!} & = 2 \left( \frac{1}{27} \right)^{1/2} \int_0^1 (4-u^2)^{1/2} du
\nonumber \\
& = 2 \left( \frac{1}{27} \right)^{1/2} I
\nonumber
\end{align}

where

\begin{align}
I & = \int_0^1 (4-u^2)^{1/2} du
\nonumber
\end{align}

Integrating by parts gives:

\begin{align}
I & = \int_0^1 (4-u^2)^{1/2} du
\nonumber \\
& = [u (4-u^2)^{1/2}]_0^1 + \int_0^1 \frac{1}{2} \cdot 2 u \cdot \frac{u}{(4-u^2)^{1/2}} du
\nonumber \\
& = \sqrt{3} - \int_0^1 \frac{4 - u^2}{(4-u^2)^{1/2}} du + 4 \int_0^1 \frac{1}{(4-u^2)^{1/2}} du
\nonumber \\
& = \sqrt{3} - I + 4 \int_0^1 \frac{1}{(4-u^2)^{1/2}} du
\nonumber \\
& = \sqrt{3} - I + 4 \left[ \sin^{-1} (u/2) \right]_0^1
\nonumber \\
& = \sqrt{3} - I + 2 \frac{\pi}{3}
\nonumber
\end{align}

Implying:

\begin{align}
I & = \frac{1}{2}\sqrt{3} + \frac{\pi}{3}
\nonumber
\end{align}

So finally:

\begin{align}
\sum_{n=1}^\infty \frac{(n!)^2}{(2n)!} & = 2 \left( \frac{1}{27} \right)^{1/2} I
\nonumber \\
& = 2 \left( \frac{1}{27} \right)^{1/2} \left( \frac{1}{2}\sqrt{3} + \frac{\pi}{3} \right)
\nonumber \\
& = \frac{1}{27} \left( 9 + 2 \sqrt{3} \pi \right) .
\nonumber
\end{align}
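Not in the original post, but the closed form is easy to confirm against partial sums of the series:

```python
import math

# Partial sum of sum_{n>=1} (n!)^2/(2n)!; the terms decay roughly like 4^-n,
# so 40 terms are far more than enough for double precision.
partial = sum(math.factorial(n) ** 2 / math.factorial(2 * n) for n in range(1, 40))
closed_form = (9 + 2 * math.sqrt(3) * math.pi) / 27

assert abs(partial - closed_form) < 1e-12
print(closed_form)  # ≈ 0.7364
```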
 
  • Like
Likes TUSHAR Singy, Pi-is-3 and fresh_42
  • #49
julian said:
I have done problem 8 using the generating function technique ...

So finally:

\begin{align}
\sum_{n=1}^\infty \frac{(n!)^2}{(2n)!} & = \frac{1}{27} \left( 9 + 2 \sqrt{3} \pi \right) .
\nonumber
\end{align}
Correct, well done. The shortest way is to choose the series for ##\arcsin^2 z## and apply ##z\dfrac{d}{dz}## twice.
 
  • #50
Considering that the sequence of functions ##f_n(x) = f(x+n)## converges uniformly, we observe the following possible properties of ##f##:
##f## can either have a finite limit at infinity, or it must be periodic with a period of the form ##\frac{1}{k}## for some ##k \in \mathbb{N}##. We will work through the cases, show that this is so, and examine the consequences.
If the sequence ##f(x+n)## converges to ##g(x)## uniformly, that means that if we take any ##\varepsilon > 0## we can find ##N## which is independent of ##x##, such that:
$$n\geq N \Rightarrow \vert f(x+n) - g(x) \vert < \varepsilon$$
If ##f## has a limit at infinity, this uniform continuity of the sequence coincides with this limit, so we see that uniform continuity of ##f_n## implies that ##g## is constant and equal to limit of ##f## at infinity at all points. This implies uniform continuity of ##g## trivially. Also, since ##f## is continuous on the whole domain and ##\lim_{x \rightarrow \infty} f(x)## exists, we can divide the domain into two intervals, ##[0,N]## and ##(N, \infty)##, for some ##N \in \mathbb{N}##. A function continuous on a closed finite interval is uniformly continuous (Heine-Cantor theorem), so ##f## is uniformly continuous on ##[0,N]##. Now suppose ##\lim_{x \rightarrow \infty} f(x) = L##. We prove that ##f## is uniformly continuous on the whole domain. For given ##\varepsilon>0## we pick ##N## such that for all ##x>N##, ##\vert f(x) - L \vert < \frac{\varepsilon}{2}##. Then for any two points ##x,y \in (N,\infty)##, we have:
$$\vert f(x) - f(y) \vert = \vert f(x) - L + L - f(y) \vert \leq \vert f(x) - L\vert + \vert f(y) - L \vert < \varepsilon$$
where we used the triangle inequality and the definition of the limit.
Therefore, we have no problems with uniform continuity if ##x## and ##y## are both in ##(N,\infty)##.
In the case where ##x## is from ##[0,N]## and ##y## is from ##(N,\infty)##, choose ##\delta_1## by continuity of ##f## at ##N## such that
$$\vert x-N \vert <\delta_1 \Rightarrow \vert f(x) - f(N) \vert <\frac{\varepsilon}{2}$$
$$\vert y-N\vert <\delta_1 \Rightarrow \vert f(y) - f(N) \vert <\frac{\varepsilon}{2}$$
Adding these inequalities and invoking the triangle inequality gives uniform continuity as in the previous case.
Hence ##f## is uniformly continuous on the whole domain.

In the case where ##f## has no limit at infinity, it can either increase or decrease without bound (in which case ##f_n## wouldn't converge uniformly, so we don't take this case into consideration), or it can oscillate.
In case of oscillation, it is either periodic with period ##T## or aperiodic.

If ##T## was irrational, then integral multiples of it would also be irrational, so ##f(x+n)## would take on completely different values for every ##n##, an infinity of them, so oscillations of ##f## at infinity would imply oscillations of ##f(x+n)## for fixed ##x##, so it wouldn't converge. Same goes if the function was oscillating, but not being periodic(like for example ##\sin(x^2)##).
If ##T## was rational and not of form ##1/m##, ##m \in \mathbb{N}##, then ##f(x+n)## would take on finite different values at different ##n##, precisely ##k## values, where ##k## is the smallest natural number such that ##kT \in \mathbb{N}##. This would also mean oscillation at fixed ##x## for all ##n\geq N## for some chosen ##N##, hence we wouldn't have uniform convergence of ##f_n## in this case either.

If ##T=\tfrac{1}{k}## for ##k \in \mathbb{N}##, that means that ##f(x+n) = f(x)##, because every ##n## contains an integer number of periods. In this case ##f(x) = g(x)##. Also, this periodicity implies boundedness in case ##f## has no vertical asymptotes (the same condition we assumed in the first case, that it is defined on its domain). In this case we can take a closed interval of length ##T## in the domain, and since ##f## is continuous, it is uniformly continuous on this interval; then by continuity and periodicity, it is uniformly continuous everywhere. The same goes for ##g##, as they are equal.
 
  • #51
fresh_42 said:
View attachment 256406

High Schoolers only

11.
Answer the following questions:
a.) How many knights can you place on an ##n\times m## chessboard such that no two attack each other?
That is a good high school question.
fresh_42 said:
b.) In how many different ways can eight queens be placed on a chessboard such that no queen threatens another? Two solutions are not different if one can be obtained from the other by a rotation or a mirroring of the board.
Surely this one is a bit too hard though?
 
  • #52
pbuk said:
Surely this one is a bit too hard though?
Yes. It's an experiment. I will accept a correct answer, without proof.
 
  • #53
Just a correction, now that enough time has passed that I can't edit the original post.
Antarres said:
If ##f## has a limit at infinity, this uniform continuity of the sequence coincides with this limit, so we see that uniform continuity of ##f_n## implies that ##g## is constant and equal to limit of ##f## at infinity at all points.
Here it's supposed to be 'uniform convergence' of the sequence, not continuity of its functions.
 
  • #54
Antarres said:
If ##f## has a limit at infinity
##\cdots##
Hence ##f## is uniformly continuous on the whole domain.
This part of the argument is OK, but it's probably easier to use overlapping intervals like ##[0,2N]## and ##[N,\infty)## so that you don't have to worry about the case where ##x## and ##y## belong to different intervals.

However, I think you're handwaving a bit in the case where there isn't a limit at infinity. In particular, the following seems to be false:

Antarres said:
Same goes if the function was oscillating, but not being periodic(like for example ##\sin(x^2)##).
The function ##f(x)=\sin(2\pi x)+e^{-x}## is not periodic, but ##f_n(x)=\sin(2\pi x)+e^{-(n+x)}## converges uniformly to ##\sin(2\pi x)##.
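A quick numerical illustration of this counterexample (my addition, not part of the argument): the uniform distance ##\sup_x \vert f_n(x)-\sin(2\pi x)\vert## equals ##\sup_x e^{-(n+x)} = e^{-n}##, which tends to ##0##.

```python
import math

def sup_dist(n, samples=1000):
    """Grid estimate of sup_x |f_n(x) - sin(2 pi x)| for x in [0, 10]."""
    return max(
        abs((math.sin(2 * math.pi * x) + math.exp(-(n + x))) - math.sin(2 * math.pi * x))
        for x in (10 * i / samples for i in range(samples + 1))
    )

# The difference is exactly e^{-(n+x)} <= e^{-n}, independent of x,
# which is precisely uniform convergence to sin(2 pi x).
for n in range(1, 8):
    assert sup_dist(n) <= math.exp(-n) + 1e-12
assert sup_dist(7) < 1e-3
```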
 
  • Like
Likes Antarres
  • #55
Thanks for the remark; now that I reread what I wrote, I notice it sounds a bit handwavy. I should've refined it a bit before posting. So I will revisit the case where no limit exists, assuming the case with a finite limit to be established (and what you mentioned about overlapping intervals does make it easier; I was a bit hasty when I wrote this, so I didn't ponder it much, thanks for that remark).

In the case where ##f(x)## has no limit as ##x## approaches infinity, it can either increase/decrease without bound or oscillate. The case of divergent increase/decrease we have discarded, since that would violate the condition that ##f(x+n)## converges uniformly as ##n \rightarrow \infty##. The oscillation case remains.

First we observe that, if ##f(x)## is periodic with period ##\frac{1}{k}##, ##k \in \mathbb{N}##, then for every ##n \in \mathbb{N}##, ##f(x+n)=f(x)##, because ##n## is an integer multiple of the period. Hence the sequence ##f_n(x)## is constant for fixed ##x##, so it converges uniformly to ##g(x)=f(x)##. Since we assume ##f(x)## is well defined and continuous on its domain, it is continuous on the closed interval ##[0,T]## and hence uniformly continuous there. Then by periodicity and continuity, it is uniformly continuous on the whole domain. So both ##f## and ##g## are uniformly continuous.

The idea behind picking specific forms for the function ##f(x)## in the two cases considered before is to narrow down the possible representations of ##g(x)##. Namely, if ##f## is oscillating, then the sequence ##f_n(x)=f(x+n)## is surely oscillating with the change of both ##x## and ##n##, behaving the same way ##f## does. By maintaining that ##n## is an integer multiple of the period of ##f##, we obtain uniform convergence, because we're making the sequence constant in ##n##; but if we allow ##f_n## to oscillate with ##n##, there is no function it could converge to, because we can't suppress its oscillation at infinity otherwise, since it is governed by the behavior of ##f##, which is preset. Therefore, we conclude that the general form of ##g(x)## must be a combination of the cases we found above.

So the general form of ##g(x)##, which is the asymptotic form of ##f(x)## at infinity, is ##g(x)=ap(x)+b##, where ##a## and ##b## are constants, and ##p## is a function whose period is of the form ##T=\frac{1}{k}## for some natural number ##k##. This form is uniformly continuous by what we have proven before, when we considered the periodic case of ##f(x)##.

As this is the limit of ##f(x)## at infinity, the form of ##f(x)## would in general be
$$f(x)=s(x)p(x)+t(x)$$
where ##s## and ##t## are continuous functions with the same domain as ##f##, which have finite limits as ##x \rightarrow \infty##, and ##p## is a periodic continuous function with period ##T=\frac{1}{k}## for some natural number ##k##. By the two cases considered before, all three of these functions are uniformly continuous, hence their combination obtained via addition and multiplication is also uniformly continuous (a well-known property of uniform continuity).

This concludes the proof; I believe there are no remaining problematic cases.
 
  • #56
fresh_42 said:
12. Determine (with justification, but without explicit calculation) which of
a.) ##1000^{1001}## and ##{1002}^{1000}##
b.) ##e^{0.000009}-e^{0.000007}+e^{0.000002}-e^{0.000001}## and ##e^{0.000008}-e^{0.000005}##
is larger.

I haven't completed part a), however for part b),

Let ##u = e^{0.000001}## such that

##A = e^{0.000009}-e^{0.000007}+e^{0.000002}-e^{0.000001} = u^{9} - u^{7} + u^{2} - u##
##B = e^{0.000008}-e^{0.000005} = u^{8} - u^{5}##

Now considering ##A - B##,
##A - B = u^9 - u^8 - u^7 + u^5 + u^2 - u = u(u+1)(u-1)^{2}(u^5 - u^2-1)##

Since ##u##, ##u+1## and ##(u-1)^{2}## are all positive, we just need to work out the sign of the final term.

##(u^5 - u^2 -1) = u^2(u^3-1) -1##

Since ##u = e^{0.000001}## is only a little greater than 1, ##(u^3-1)## is going to be very close to ##0## from above; then on multiplication by ##u^{2}##, which is also very close to 1, and finally subtraction of the 1, this term becomes negative. And a negative ##A-B## implies the second sum is larger than the first.
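Not part of the argument above, but the conclusion can be confirmed in floating point. The difference ##B-A## is of order ##10^{-12}##, so it is safer to compute with ##e^x-1## via `expm1` than with ##e^x## directly; the constant ##+1##'s cancel in both ##A## and ##B##. A quick check, my addition:

```python
from math import expm1

# expm1(x) = e^x - 1; the constants cancel (+1 -1 +1 -1 in A, +1 -1 in B),
# so A and B are unchanged, but catastrophic cancellation is avoided.
A = expm1(9e-6) - expm1(7e-6) + expm1(2e-6) - expm1(1e-6)
B = expm1(8e-6) - expm1(5e-6)

# Both are approximately 3e-6; B exceeds A by about 2e-12.
assert A < B
```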
 
  • Like
Likes Pi-is-3
  • #57
etotheipi said:
I haven't completed part a), however for part b), ...

So much better than what I was going to post. I just expanded ##e^x## up to the first 3 terms to get that the second is greater than the first.
 
  • #58
Pi-is-3 said:
So much better than what I was going to post. I just expanded ##e^x## up to the first 3 terms to get that the second is greater than the first.

That sounds as though it could be a good solution as well though!
 
  • #59
etotheipi said:
... Since ##u = e^{0.000001}## is only a little greater than 1, ##(u^3-1)## is going to be very close to 0+, then on multiplication by ##u^{2}## - which is also very close to 1 - and finally subtraction of the one, this term will become negative. And a negative ##A-B## implies the second sum is larger than the first.
A bit more formalism than "little greater than"
##1<u<u^2<u^5 =e^{0.000005} < 4^{0.000005}< 4^{0.5}=2##
would have been nice, but your solution is correct.
 
  • Like
Likes etotheipi
  • #60
mmaismma said:
I have solved part (a). I will post the answer in text form when I solve both parts. Until then I have numbered all the equations.
-----
View attachment 256655
I have taken ##m## as vertical and ##n## as horizontal; it doesn't matter, as both swap on rotating the board.
The language of the question is quite complex. If the question wants the maximum number of knights, then it is always given by case 2 (case 1 matches its values when ##m## is of the form ##3x+1##), except when ##m## is 1.
This is close, but for ##n=6=3k## and ##m=3## you have ##6## knights, but ##9## are possible.

It is better to distinguish the cases ##m=1,m=2,m>2##.
 
  • #61
fresh_42 said:
This is close, but for ##n=6=3k## and ##m=3## you have ##6## knights, but ##9## are possible.

It is better to distinguish the cases ##m=1,m=2,m>2##.
In my answer I already noted that the required number is always a maximum in case 2 except for ##m=1##, which gives ##9## for the given case. However, if you want me to distinguish the cases ##m=1, m=2, m>2##, here it is.
[Attached image: IMG_20200208_091506.jpg]
 
  • Like
Likes Pi-is-3
  • #62
mmaismma said:
In my answer I already noted that the required number is always a maximum in case 2 except for ##m=1##, which gives ##9## for the given case. However, if you want me to distinguish the cases ##m=1, m=2, m>2##, here it is.

You can simplify the formulas for odd ##n##.

For ##n## and ##m## both odd, the formula simplifies to ##\frac{mn+1}{2}##.
For ##n## odd and ##m## even, the formula simplifies to ##\frac{mn}{2}##.

This way both cases of ##n## feel interconnected.
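These closed forms, and the special role of ##m=2## mentioned earlier in the thread, can be checked by exhaustive search on small boards (a brute-force sketch, my addition, not anyone's posted method):

```python
from itertools import combinations

def attack(a, b):
    """True if knights on squares a and b attack each other."""
    dr, dc = abs(a[0] - b[0]), abs(a[1] - b[1])
    return {dr, dc} == {1, 2}

def max_knights(m, n):
    """Maximum number of non-attacking knights on an m x n board (small boards only)."""
    squares = [(r, c) for r in range(m) for c in range(n)]
    for k in range(len(squares), 0, -1):  # search from the largest size down
        for combo in combinations(squares, k):
            if all(not attack(a, b) for a, b in combinations(combo, 2)):
                return k
    return 0

assert max_knights(3, 3) == (3 * 3 + 1) // 2   # both odd: (mn+1)/2 = 5
assert max_knights(4, 3) == (4 * 3) // 2       # n odd, m even, m > 2: mn/2 = 6
assert max_knights(2, 3) == 4                  # m = 2 is an exception: more than mn/2
```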
 
  • Like
Likes fresh_42
  • #63
fresh_42 said:
12. Determine (with justification, but without explicit calculation) which of
a.) ##1000^{1001}## and ##{1002}^{1000}##
is larger.
b.) ##e^{0.000009}−e^{0.000007}+e^{0.000002}−e^{0.000001}## and ##e^{0.000008}−e^{0.000005}##
is larger.

I am not entirely sure if this is an acceptable solution. It relies on knowing that ##2 \lt e \lt 3## and derives some bounds on powers and logarithms from that. In that sense it is not free from calculation, though it does not rely on any further explicit calculation. I guess the ideal solution would use some standard identities that I am unaware of.
a.) Define ##f(x) = x ^ {\frac {3002-x} {2}}## and ##g(x) = \log f(x) = \dfrac {3002-x} {2} \log x##
It is easy to see that ##{1000}^{1001} = f(1000)## and ##{1002}^{1000} = f(1002)##

Since logarithm is a monotonically increasing function, proving that ##\log a \gt \log b## is equivalent to proving that ##a \gt b## provided logarithm is well-defined for those values.

##g'(x) = \dfrac {3002-x} {2x} - \dfrac {\log x} {2} = \
\dfrac {1501} {x} - \dfrac {1 + \log x} {2}##

For ##x \geq 1000##, ##g'(x) \leq \dfrac{1501}{1000} - \dfrac{1 + \log 1000}{2} \lt 1.501 - 2.5 \lt 0##, so ##g'(x) \lt 0##; here I use the fact that ##e \lt 3##: since ##3^4 = 81 \lt 1000##, we have ##e^4 \lt 3^4 \lt 1000 \Rightarrow \log 1000 \gt 4##.

Since ##g(x)## has a negative slope for ##x \geq 1000##, ##g(y) \lt g(1000) \ \forall y \gt 1000##. Hence, ##g(1002) \lt g(1000) \Rightarrow f(1002) \lt f(1000) \Rightarrow 1000^{1001} \gt 1002^{1000}##
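As a numerical sanity check of part a.) (an illustration only, not part of the "without explicit calculation" argument), one can compare the logarithms of the two numbers and evaluate the slope bound directly:

```python
import math

# Compare log(1000^1001) = 1001*log(1000) with log(1002^1000) = 1000*log(1002)
lhs = 1001 * math.log(1000)
rhs = 1000 * math.log(1002)
assert lhs > rhs  # hence 1000^1001 > 1002^1000

# The slope bound used above: g'(x) = 1501/x - (1 + log x)/2 is negative at x = 1000
g_prime_at_1000 = 1501 / 1000 - (1 + math.log(1000)) / 2
assert g_prime_at_1000 < 0
```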

b.) Let ##x = e^{0.000001}## and let ##A, B## denote the two expressions whose values are to be compared. Then
##A - B = (e^{0.000009}−e^{0.000007}+e^{0.000002}−e^{0.000001}) - (e^{0.000008}−e^{0.000005}) = x^9 - x^8 - x^7 + x^5 + x^2 - x##
##= x(x^8 - x^7 - x^6 + x^4 + x - 1) = x(x^7(x-1) - x^4(x^2 - 1) + (x-1)) = x(x-1)(x^7 - x^4(x+1) + 1)##
##= x(x-1)(x^7 - x^5 - x^4 + 1) = x(x-1)(x^5(x^2 - 1) - (x^2 + 1)(x^2 - 1)) = x(x-1)(x^2 - 1)(x^5 - (x^2 + 1))##

Since ##x > 1## (##e^a \gt 1 \ \forall a \gt 0## and by definition ##x = e^{0.000001}##), it follows that ##x(x-1)(x^2 - 1) \gt 0##. We try to find whether ##(x^5 - (x^2 + 1))## is positive or not, and for this we may compare the logarithms of ##x^5## and ##(x^2 + 1)##. Note that in the derivations, by ##\log x## I mean the natural logarithm of ##x##.

##\log {x^5} = \log {e^{0.000005}} = 0.000005##

##\log {x^2 + 1} = \log {e^{0.000002} + 1} \gt \log 2##
Since ##2 \lt e \lt 3 \lt 4##, ##\log 2 \gt \log_{4} 2 = 0.5##
Since ##0.5 > 0.000005##, it follows that ##\log {x^5} \lt \log {x^2 + 1} \Rightarrow (x^5 - (x^2 + 1)) \lt 0##. Hence the value ##x(x-1)(x^2 - 1)(x^5 - (x^2 + 1)) \lt 0## implying that ##A - B \lt 0## or ##A \lt B##.

Therefore ##e^{0.000009}−e^{0.000007}+e^{0.000002}−e^{0.000001}## must be smaller than ##e^{0.000008}−e^{0.000005}##.
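Both the factorization chain and the final sign can be cross-checked mechanically; a short sketch (exact rational arithmetic for the algebra, floating point for the sign; an illustration only):

```python
import math
from fractions import Fraction

# Exact check of the factorization: A - B = x(x-1)(x^2-1)(x^5 - (x^2+1))
for x in (Fraction(3, 2), Fraction(7, 5), Fraction(1, 3)):
    expanded = x**9 - x**8 - x**7 + x**5 + x**2 - x
    factored = x * (x - 1) * (x**2 - 1) * (x**5 - (x**2 + 1))
    assert expanded == factored

# Direct floating-point comparison of A and B. The difference is about -2e-12,
# comfortably above double-precision noise for these near-1 exponentials.
u = 0.000001
A = math.exp(9*u) - math.exp(7*u) + math.exp(2*u) - math.exp(u)
B = math.exp(8*u) - math.exp(5*u)
assert A < B
```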
 
  • #64
Sorry, I missed adding parentheses in one of the steps
It was meant to be read as ##\log {(x^2+1)} = \log {(e^{0.000002}+1)} > \log 2##. In the earlier post, I missed the brackets around ##x^2+1## and ##e^{0.000002}+1##
 
  • #65
mmaismma said:
In my answer I have already said that the required number is always the maximum in case 2, except for ##m=1##, which will give ##9## for the given case. However, if you want me to distinguish the cases as ##m=1,m=2,m>2##, here it is.
##m=2## and ##n=2## allows ##4\neq n## knights.
Pi-is-3 said:
For ##n## and ##m## both odd the formula can be simplified to ##\frac{mn+1}{2}##.
For ##n## odd, ##m## even, the formula can be simplified to ##\frac{mn}{2}##.
It can be further simplified to ##\lceil \dfrac{nm}{2}\rceil##.

Nobody with an idea how to find the correct number for part b)?
 
  • #66
Pi-is-3 said:
You can simplify the formulas for odd ##n##.

For ##n## and ##m## both odd, the formula can be simplified to ##\frac{mn+1}{2}##.
For ##n## odd, ##m## even, the formula can be simplified to ##\frac{mn}{2}##.

This way both cases of ##n## feel interconnected.
You are right, but I intentionally didn't simplify the formulas, because this way one can easily see how I derived them.
 
  • #67
fresh_42 said:
##m=2## and ##n=2## allows ##4\neq n## knights.
Yes, you are right. There is an exception for such small boards that I couldn't figure out.
fresh_42 said:
it can be further simplified to ##\lceil \dfrac{nm}{2}\rceil##.
What is a 'ceil'?
 
  • #68
mmaismma said:
Yes, you are right. There is an exception for such small boards that I couldn't figure out.
You have almost everything right except for ##m=2##. The example of a ##2\times 2## board can be generalized to ##n\times 2## boards. The formula is a bit tricky. I suggest writing ##n=4k+r\, , \,r=1,2,3,4##.
mmaismma said:
What is a 'ceil'?
Rounded up to the next integer.
mmaismma said:
You are right, but I intentionally didn't simplify the formulas, because this way one can easily see how I derived them.
You can give an easy and straight argument if you look at an actual board and observe what a knight move does.
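For tiny boards the piecewise answer can be checked against an exhaustive search. A brute-force sketch in Python; note the ##m=2## closed form below is my own reading of the ##n=4k+r## hint, not a formula stated verbatim in the thread:

```python
from itertools import product

def attacks(a, b):
    """True if squares a = (r, c) and b are a knight's move apart."""
    return (abs(a[0] - b[0]), abs(a[1] - b[1])) in {(1, 2), (2, 1)}

def brute_force(n, m):
    """Maximum number of non-attacking knights on an n x m board, by exhaustive
    search over all subsets (exponential in n*m, so tiny boards only)."""
    squares = list(product(range(n), range(m)))
    best = 0
    for mask in range(1 << len(squares)):
        chosen = [s for i, s in enumerate(squares) if mask >> i & 1]
        if all(not attacks(p, q) for i, p in enumerate(chosen) for q in chosen[i + 1:]):
            best = max(best, len(chosen))
    return best

def formula(n, m):
    """Piecewise closed form assembled from the thread; the m = 2 branch is an
    assumption based on the n = 4k + r hint."""
    n, m = max(n, m), min(n, m)
    if m == 1:
        return n                      # on a 1 x n strip knights can never attack
    if m == 2:
        k, r = divmod(n, 4)           # write n = 4k + r
        return 4 * k + 2 * min(r, 2)  # fill alternating 2 x 2 blocks
    return (n * m + 1) // 2           # one colour class: ceil(nm/2)

for n, m in [(1, 5), (2, 2), (2, 3), (2, 5), (2, 6), (3, 3), (3, 4)]:
    assert brute_force(n, m) == formula(n, m), (n, m)
```

The ##m>2## branch is the colour argument: a knight always moves from a white square to a black one, so all squares of one colour are mutually non-attacking.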
 
  • #69
Antarres said:
In case where ##f(x)## has no limit as ##x## approaches infinity, it can either increase/decrease without bound or oscillate.
By "increase/decrease without bound" do you mean ##\lim_{x\to\infty}|f(x)|=\infty## or just ##\lim\sup_{x\to\infty}|f(x)|=\infty##? What does "oscillate" mean?

Antarres said:
First we observe that, if ##f(x)## is periodic with period ##\frac{1}{k}##, ##k \in \mathbb{N}##, then for every ##n \in \mathbb{N}##, ##f(x+n)=f(x)##, because ##n## is an integer multiple of the period. Hence the sequence ##f_n(x)## is constant for fixed ##x##, so it converges uniformly to ##g(x)=f(x)##. Since we assume ##f(x)## is well defined and continuous on its domain, we have that, since it is continuous on a closed interval ##[0,T]##, it is uniformly continuous on that interval. Then by periodicity and continuity, it is uniformly continuous on the whole domain. So both ##f## and ##g## are uniformly continuous.
I agree with this (and probably you also mean to allow functions which are eventually periodic).

Antarres said:
The idea of why we picked specific forms for function ##f(x)## in the two cases we considered before, is to narrow down possible representations of ##g(x)##. Namely, if ##f## is oscillating, then the sequence ##f_n(x)=f(x+n)## is surely oscillating with the change of both ##x## and ##n##, behaving the same way ##f## does. By maintaining that ##n## is an integer multiple of the period of ##f##, we obtain uniform convergence, because we're making the sequence constant with ##n##, but if we allow ##f_n## to oscillate with ##n##, we won't have any function that it would converge to, because we can't suppress its oscillation at infinity otherwise, since it is governed by the behavior of ##f## which is preset.
I can't figure out what mathematical claim you are making here. Can you make this precise?

Antarres said:
Therefore, we conclude that the general form of ##g(x)## must be a combination of the cases we found above.
Why?
 
  • #70
Infrared said:
By "increase/decrease without bound" do you mean ##\lim_{x\to\infty}|f(x)|=\infty## or just ##\lim\sup_{x\to\infty}|f(x)|=\infty##? What does "oscillate" mean?
Okay, I mean the first: ##\lim_{x\to\infty} |f(x)| = \infty##. From the way ##f_n## is constructed, we see that its convergence depends on the behavior of ##f## at infinity, and that ##g(x)## is precisely the asymptotic form of ##f## at infinity.
So what I meant is: if there is no finite limit, then either ##f## diverges to positive/negative infinity (that's what I meant by increase/decrease without bound), or there is more than one accumulation point (no unique point to which the sequence converges for fixed ##x##, violating pointwise convergence, and hence uniform convergence as well).
Infrared said:
I agree with this (and probably you also mean to allow functions which are eventually periodic).
I do indeed allow that case (in which a function eventually becomes periodic); I commented on it later in the answer, when I obtain the general form of ##f##.
Infrared said:
I can't figure out what mathematical claim you are making here. Can you make this precise?

What I mean is that the behavior of ##f_n## as ##n\to\infty## mirrors the behavior of ##f## as ##x\to\infty##, unless we make the sequence independent of ##n##. This is a consequence of constructing ##f_n## as a translation along the ##x##-axis. So either ##f## has a finite limit, in which case the sequence also has a finite limit and converges uniformly to a constant (which I addressed in the first answer), or ##f## is periodic with a period that makes ##f_n## independent of ##n##. This is the case of a periodic function with period ##\frac{1}{k}##. Now, this doesn't exhaust the possible behaviors of ##f##, but it does exhaust the possible behaviors of ##f_n## and hence of ##g##.

If we, for example, allow ##f## to be periodic with a period different from the form we mentioned, that would make the limit of ##f_n## undefined, since it would produce more than one accumulation point (which I addressed when I discussed general rational and irrational periods). Even if ##f## is aperiodic, it must behave in such a way that at infinity it approaches a periodic function of period ##\frac{1}{k}##; otherwise ##f_n## wouldn't converge even pointwise.

So that's what I meant: these two cases are the two possibilities for how ##f_n## can converge, and from there I give the general form of ##g##, which is ##aP(x) + b##. Every ##g## can be written in this general form, since ##P## is a completely general continuous function of period ##\frac{1}{k}##. So I considered the specific cases of how ##f_n## converges and then combined them in a general way.

From there, I find that ##f## must be of the general form ##s(x)P(x)+t(x)##, where ##s## and ##t## converge to the ##a## and ##b## we find in ##g##, and ##P## is a periodic function of period ##\frac{1}{k}##, so it doesn't interfere with the convergence of ##f_n##.

All other cases can be broken down to these possibilities; the main point is that we want ##f_n## to have a well-defined limit, and then we take all possible combinations of functions that achieve that. I think it is precise enough now, unless I missed something of essential importance. Also, when I mention accumulation points, it is an abuse of terminology: in terms of uniform convergence those 'points' are functions, while in terms of pointwise convergence they are actual points. But it should be clear from the context what I mean in each particular case.
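The dichotomy can be illustrated numerically: for ##f## with period ##\frac{1}{k}## the translates ##f_n(x)=f(x+n)## are literally the same function for every ##n##, while for another period consecutive translates stay a fixed distance apart in the sup norm. A small sketch (the grid-based sup estimate is my own illustration, not part of the argument above):

```python
import math

def sup_diff(f, n, grid=200):
    """Grid estimate of sup over [0, 1] of |f(x + n) - f(x + n + 1)|,
    i.e. the sup-norm distance between consecutive translates f_n and f_{n+1}."""
    return max(abs(f(i / grid + n) - f(i / grid + n + 1)) for i in range(grid + 1))

periodic = lambda x: math.sin(2 * math.pi * 3 * x)  # period 1/3: f_n(x) = f(x) for all n
aperiodic = lambda x: math.sin(x)                   # period 2*pi: translates keep moving

assert sup_diff(periodic, 50) < 1e-9   # consecutive translates coincide: uniform limit
assert sup_diff(aperiodic, 50) > 0.5   # consecutive translates stay far apart: no limit
```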
 
