# Math Q&A Game

1. Apr 13, 2005

### Gokul43201

Staff Emeritus
A Q&A game is simple: One person asks a relevant question (it can be research, calculation, a curiosity, something off-the-top-of-the-head, anything ... as long as it's a math question) and other people try to answer. The person who posts the first correct answer (as recognized by whoever asked the question) gets to ask the next question, and so on.

Let me start this off with a simple number theory problem :

What is the least number by which 100! (that's a factorial) must be multiplied to make it divisible by $12^{49}$?

(throw in a brief -couple of lines or so- explanation with the answer)

2. Apr 13, 2005

### JonF

1/(99!*50)

3. Apr 13, 2005

### Gokul43201

Staff Emeritus
Not correct. "Multiplying" by your number gives a "product" of 2 (= 100!/(50 · 99!)), and 2 is not divisible by $$12^{49}$$.

4. Apr 13, 2005

### JonF

I think I read it backwards...

5. Apr 13, 2005

### matt grime

There are 50 numbers up to 100 divisible by 2, 25 by 4, 12 by 8, 6 by 16, 3 by 32, and 1 by 64, so the power of two in 100! is 97.

Similarly, there are

33 divisible by 3, 11 by 9, 3 by 27, and 1 by 81, making 48 factors of 3, so I guess

2*3^50 will do.

6. Apr 13, 2005

### Gokul43201

Staff Emeritus
I'm not sure I follow the finish...

You've shown that 100! contains $$2^{97} \cdot 3^{48}$$. And what happened after that?

In any case: the next question is yours ...

7. Apr 13, 2005

### matt grime

Duh, do I feel stupid for typing too quickly whilst someone was talking to me. It should have been 6.

Will think of a question later tonight.

Last edited: Apr 13, 2005
8. Apr 13, 2005

### matt grime

Ok, here's one that I hope isn't too tricky.

Let A be an nxn matrix over C, the complex numbers.

Suppose that Tr(A^r) = 0 for all positive integers r. Show that A is nilpotent.

All the other ones I could think of were too easily looked up.
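Matt's problem asks for the hard direction; the easy converse (a nilpotent matrix has Tr(A^r) = 0 for every r ≥ 1) can at least be sanity-checked numerically. A minimal sketch in Python (the helper names are mine, not from the thread):

```python
def mat_mul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

# A strictly upper triangular matrix is nilpotent (here A^3 = 0),
# so every power of it should be traceless.
A = [[0, 2, 5],
     [0, 0, 3],
     [0, 0, 0]]
P = A
traces = []
for r in range(1, 4):
    traces.append(trace(P))
    P = mat_mul(P, A)
print(traces)  # [0, 0, 0]
```

The content of the problem is that vanishing of these traces is not just necessary but sufficient for nilpotency over C.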

9. Apr 13, 2005

### CRGreathouse

Quick explanation

$$12^{49}=2^{98}3^{49}$$, so $$100!=2^{97}3^{48}X$$ must be multiplied by $$6=2^{98-97}3^{49-48}$$ before $$12^{49}$$ will divide it.
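The exponent counts above are an instance of Legendre's formula, $$e_p(n!) = \sum_{k \ge 1} \lfloor n/p^k \rfloor$$. A quick check in Python (the function name is mine):

```python
from math import factorial

def prime_exponent_in_factorial(n, p):
    """Exponent of the prime p in n!, via Legendre's formula:
    sum of floor(n / p^k) over k >= 1."""
    total, power = 0, p
    while power <= n:
        total += n // power
        power *= p
    return total

e2 = prime_exponent_in_factorial(100, 2)  # 97
e3 = prime_exponent_in_factorial(100, 3)  # 48
print(e2, e3)
# 12^49 = 2^98 * 3^49, so 100! is short exactly one 2 and one 3: multiply by 6.
print((factorial(100) * 6) % 12**49 == 0)  # True
```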

10. Apr 13, 2005

### Gokul43201

Staff Emeritus
Yes, CRG: matt clarified this in his subsequent post. Thanks!

Back to matt's question:

11. Apr 13, 2005

### matt grime

If there's no progress on this after a couple of days I'll post hints. Perhaps people should post what they think they need to do. I like the question because it uses lots of bits from here, there, and everywhere.

12. Apr 13, 2005

### snoble

Well, it seems that first you need to show that such a matrix does not have n linearly independent eigenvectors, and then show that not having them implies nilpotence.

The first part relates to the fact that if a matrix had a full set of eigenvectors, then diagonalization would give us $$\sum \lambda_i^n =0$$ for all n>0.

For the second half, I would gather you need to consider the vectors not in the span of the eigenvectors and consider where they may be mapped.

Hmm... that may not be enough for the first half. You may need to also show that any vector that is an eigenvector is in fact in the kernel.

Experimentally, it appears we are dealing with an upper or lower triangular matrix if you just assume Tr(A^r) = 0 for r = 1..n. But that could just be Maple failing to return all possible answers, which it sometimes does.

That's what I've been thinking

13. Apr 13, 2005

### Hurkyl

Staff Emeritus
Hrm.

If a matrix A is diagonalizable, then I claim that there exists an n such that all of the nonzero eigenvalues of A^n lie in the right half-plane. The requirement that Tr(A^n) = 0 then forces all the eigenvalues to be 0, and thus A is zero... clearly nilpotent!

So the trick, then, is when the matrix is not diagonalizable.

Then again, this only works for complex-valued matrices.

Last edited: Apr 13, 2005
14. Apr 14, 2005

### matt grime

I did state the matrix was over C, though this is largely for convenience.

And, snoble, the zero matrix has a full set of eigenvalues, is certainly nilpotent, and satisfies the criterion.

15. Apr 14, 2005

### snoble

Oops... of course, that is the sole matrix with a full set of 0 eigenvalues, and $$\lambda_i=0$$ is the sole set of solutions in C satisfying my condition.

16. Apr 14, 2005

### uart

Sorry, but the correct answer is the rational number 12^49 / 100!; it's smaller than the previous answer of 6 by a factor of approx 10^104.

OK, so here's my Q&A puzzle: why is it that mathematicians are worse than the general layperson at specifying that they require a whole-number solution when that is the case? :p

17. Apr 14, 2005

### matt grime

If you're going to take that attitude, there is no answer; think negatives.

18. Apr 14, 2005

### shmoe

Because a mathematician will expect the reader to see the words "divisible" and "number theory" and realize that the interesting solution lies in the whole numbers. In fact, if you throw rationals into the mix, the concept of "divisibility" collapses into something really dull (as it does in any field). The underlying assumption that we're interested in whole numbers here is just like your assumption, when you saw "least number", that it had to be positive: 0 works fine and is smaller than yours, and so is -6, etc.

19. Apr 14, 2005

### uart

Actually, I didn't assume that it had to be positive, but I could see that there was no solution if negatives were included, so I took the liberty of further constraining the under-specified problem so that it did have a solution.

Anyway, don't take that last post too seriously; it was a wind-up, and I really did know that he meant a positive integer. :)

20. Apr 15, 2005

### snoble

Grimey, I see it now.

The trick isn't to diagonalize the matrix; the trick is to write it out in the form $$A=PUP^{-1}$$ where U is upper triangular (I'll post how you know you can do this later). Then the trace of A^r is just the trace of U^r, since A is just a conjugate of U. And of course the trace of U^r being 0 is the same as $$\sum_i d_i^r=0$$, where $$d_i$$ are the diagonal elements. Then Hurkyl had the right idea when he said you could find an r such that $$d_i^r$$ has positive real part. In fact, what you can say is that given any finite set of non-zero complex numbers and any $$\epsilon>0$$, there exists a positive integer r such that $$|Arg(d_i^r)| < \epsilon$$. This is either an analysis statement or a number theory statement, depending on your interpretation. So U is upper triangular and has a 0 diagonal; therefore $$A^{n}=PU^{n}P^{-1}=P0P^{-1}=0$$, where A is nxn.

So two big statements are still to be proven, if anybody wants to do them (upper triangularization and $$|Arg(d_i^r)| < \epsilon$$), or I will write them up when I'm not supposed to be busy working on something else.

Let's call a subset A of [0,1] paired if $$0,1\in A$$ and for every pair $$a,b \in A$$ such that $$a<b$$ and $$b-a \ne 1$$ there exists another pair $$c,d \in A$$ such that $$c<d$$ and $$c\ne a$$ and $$d-c = b-a$$. That sounds very complicated, but all it says is that in a paired set every distance between points, except 1, occurs at least twice. An example of a paired set is {0, 1/5, 2/5, 4/5, 1}. Notice that the difference 1/5 appears 3 times and the differences 2/5, 3/5, and 4/5 each appear twice, but the difference 1 appears only once.

Alright, here's the question: are there any finite (containing only finitely many elements) paired sets such that there exists an irrational number in the set? Give an example or prove why not.

I will be really impressed if you use Euclidean geometry.
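The paired condition can be checked mechanically for any candidate set. A minimal checker in Python using exact rational arithmetic (the function name is mine), confirming the {0, 1/5, 2/5, 4/5, 1} example:

```python
from collections import Counter
from fractions import Fraction

def is_paired(points):
    """snoble's condition: 0 and 1 belong to the set, and every pairwise
    difference other than 1 occurs at least twice."""
    pts = sorted(points)
    if Fraction(0) not in pts or Fraction(1) not in pts:
        return False
    # Count every pairwise difference b - a over a < b.
    diffs = Counter(b - a for i, a in enumerate(pts) for b in pts[i + 1:])
    return all(count >= 2 for d, count in diffs.items() if d != 1)

example = [Fraction(0), Fraction(1, 5), Fraction(2, 5), Fraction(4, 5), Fraction(1)]
print(is_paired(example))  # True
print(is_paired([Fraction(0), Fraction(1, 3), Fraction(1)]))  # False: 1/3 and 2/3 occur once
```

Exact rationals obviously can't settle the irrational case, but the checker makes the definition concrete for experimenting with candidate sets.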

Regards,
Steven

21. Apr 15, 2005

### matt grime

Firstly, you do not need to "prove" it can be put in upper triangular form; this is Jordan canonical form.

Whilst you certainly have the right idea, please note Gokul's rule that you must wait for me to say if it is correct before posting a new question. I would like to see you, or anyone else, prove that if d_1, ..., d_n is a set of complex numbers such that the sum of all the r'th powers is zero for all r, then all the d_i are zero. Please note this statement is true in any field, not just C. The reason I would like to see it is that I think its proof (or at least the one in my mind) is nice, not because I think it is hard. For a hint, think Newton.
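Presumably the Newton hint refers to Newton's identities, which recover the elementary symmetric functions e_k of d_1, ..., d_n from the power sums p_r: if every p_r vanishes, every e_k vanishes, so the characteristic polynomial is x^n and the matrix is nilpotent by Cayley-Hamilton. A sketch of the recurrence in Python (the function name is mine; note the division by k, so this form of the argument needs the field's characteristic to be 0 or larger than n):

```python
from fractions import Fraction

def elementary_from_power_sums(p):
    """Newton's identities: k * e_k = sum_{i=1}^{k} (-1)^(i-1) * e_{k-i} * p_i,
    with e_0 = 1. Returns [e_1, ..., e_n] given power sums [p_1, ..., p_n]."""
    n = len(p)
    e = [Fraction(1)] + [Fraction(0)] * n
    for k in range(1, n + 1):
        s = sum((-1) ** (i - 1) * e[k - i] * p[i - 1] for i in range(1, k + 1))
        e[k] = s / k
    return e[1:]

# Vanishing power sums force every elementary symmetric function to vanish:
print(elementary_from_power_sums([Fraction(0)] * 4))

# Sanity check with roots {1, 2}: p_1 = 3, p_2 = 5 gives e_1 = 3, e_2 = 2.
print(elementary_from_power_sums([Fraction(3), Fraction(5)]))
```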

22. Apr 15, 2005

### snoble

Hmm... I can't do it for a field in general, but I can do it for the complex numbers by induction.

I will prove that given a finite set $$d_1, d_2, ...,d_n$$ of non-zero complex numbers and $$\epsilon>0$$, there exists a positive integer r such that $$|Arg(d_i^r)| <\epsilon$$ for every i.

Base case: start with $$d_1 \ne 0$$ and $$\epsilon>0$$, and take N>0 such that $$2\pi/N < \epsilon$$. Let $$c_r = Arg(d_1^r)$$. Then there exist i, j with $$1 \le i < j \le N$$ and $$dist(c_i, c_j) < \epsilon$$ (note that the distance function I am using is the distance between angles on the unit circle, so $$dist(3\pi /4, -3\pi /4) = \pi/2$$). This is easy to see: the N angles can be reordered so that $$c_{\sigma (1)}\le c_{\sigma(2)}\le ...\le c_{\sigma(N)}$$, whence $$dist(c_{\sigma (1)}, c_{\sigma(2)}) + dist(c_{\sigma (2)}, c_{\sigma(3)}) + \ldots + dist(c_{\sigma (N-1)}, c_{\sigma(N)})+ dist (c_{\sigma (N)}, c_{\sigma(1)}) \le 2\pi$$, and since dist is non-negative there is some k with $$dist(c_{\sigma(k)}, c_{\sigma(k+1)}) \le 2\pi/N < \epsilon$$.

So since $$dist(c_i, c_j) <\epsilon$$, we get $$dist(c_{i-1}, c_{j-1}) <\epsilon$$, hence $$dist(c_{1}, c_{j-i+1}) < \epsilon$$, hence $$dist(0, c_{j-i}) < \epsilon$$. So $$r=j-i$$ works.

The inductive step is straightforward. Given $$d_1,\ldots, d_{n+1} \ne 0$$ and $$\epsilon >0$$, choose N with $$2\pi/N < \epsilon$$ and take r' such that $$|Arg(d_i^{r'})| < \epsilon/N$$ for $$1 \le i \le n$$. Then, as in the base case, there exists an r with $$1\le r\le N$$ such that $$|Arg( (d_{n+1}^{r'})^r)| < \epsilon$$, and of course $$|Arg(d_i^{r'\cdot r})| < r\cdot\epsilon/N \le \epsilon$$.

So, with the above in mind, given a finite set of complex numbers none of which is zero, take r such that $$|Arg(d_i^r)| < \pi/2$$ for all i. Then $$Re(d_i^r) >0$$, so $$Re(\sum d_i^r) >0$$, which contradicts $$\sum d_i^r =0$$. Therefore, if $$\sum d_i^r =0$$ for all r, then $$d_1=d_2=\ldots =d_n=0$$.
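The pigeonhole argument is non-constructive, but the r it promises can be found by brute force. A small search in Python working directly with the arguments, to avoid overflowing |d_i|^r for large r (the function name, search bound, and sample inputs are mine):

```python
import math

def small_arg_power(phases, eps, max_r=10**6):
    """Search for a positive integer r with |Arg(d^r)| < eps for every d whose
    argument is listed in `phases`. math.remainder(x, 2*pi) reduces r*theta
    into [-pi, pi], so the test matches the angular distance used above."""
    for r in range(1, max_r + 1):
        if all(abs(math.remainder(r * t, 2 * math.pi)) < eps for t in phases):
            return r
    return None  # bound exceeded; the argument guarantees some r exists

# Arguments of some sample non-zero complex numbers.
phases = [math.atan2(0.3, -1.0), math.atan2(1.1, 0.2), math.atan2(-2.0, 1.0)]
r = small_arg_power(phases, 0.5)
print(r, [round(math.remainder(r * t, 2 * math.pi), 3) for t in phases])
```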

I am curious about how to do this for a general field, though. Sorry about jumping the gun by posting my question, but I really happen to like that question, since it doesn't necessarily use the math you would expect.

23. Apr 15, 2005

### matt grime

You certainly get to ask that as the next question, and I won't post the answer to it, but I'd like to get this one cleared up.

24. Apr 15, 2005

### snoble

Hmm... does the general solution have something to do with taking a field F and the n-dimensional vector space F^n over F, then taking the linear functional g: F^n -> F such that
g(f1, f2, ..., fn) = f1 + f2 + ... + fn, and concluding that the kernel of g has dimension n-1? But if a1^k + a2^k + ... + an^k = 0 for all k > 1, then (a1^i, a2^i, ..., an^i) is orthogonal to (a1^j, ..., an^j) under the standard dot product. So either the vectors (a1^k, ..., an^k) for k from 1 to n form a spanning set, or one of them is the 0 vector. If they spanned, then the whole space would be in the kernel of g, so 1+0+0+...+0 = 0, which doesn't make sense. So ai^k = 0 for all i. But then ai = 0, because a field is a domain.

Can you do that? Can you always use orthogonality under the standard dot product to conclude that the set is a spanning set? I think I may have to assume the conclusion to be able to do that.