Proving the Theorem: (b,c)=1 implies (a,bc)=(a,b)(a,c) in Number Theory

tennesseewiz

Homework Statement


Let a,b,c be integers. If (b,c)=1, then (a,bc)=(a,b)(a,c)

Homework Equations


This is difficult to answer because we can only use theorems we have already proven in class.

The Attempt at a Solution


Let g=(a,b) and h=(a,c), g and h are integers.
That means g|a and g|b, h|a and h|c.
That means:
a=gk, for some integer k
b=gq, for some integer q
a=hp for some integer p
c=ht for some integer t.

Multiplying b and c together gives bc = (gq)(ht) = (a,b)(a,c)qt. Then (a,bc) = (gk, (a,b)(a,c)qt), but that's as far as I got.
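Not part of the proof, but to convince myself the identity is even true, I ran a quick check with Python's math.gcd (the sample triples are arbitrary, just cases where (b,c)=1):

```python
from math import gcd

# Quick sanity check of the claim (a, bc) = (a, b)(a, c) when (b, c) = 1.
# The triples below are arbitrary examples, not part of the proof.
for a, b, c in [(12, 4, 9), (30, 5, 7), (100, 8, 25)]:
    assert gcd(b, c) == 1                                # hypothesis: b, c coprime
    print(a, b, c, gcd(a, b * c), gcd(a, b) * gcd(a, c))  # the two sides agree
```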
 
Sometimes it is easier to think about what things mean. Suppose that d divides both a and bc. Think about the prime factors of d. Can you split them into two types? (Or is uniqueness of prime factorisation something you can't use?)
 
Yeah, we can use uniqueness of prime factorization. We proved it last month, so that's why we can use it. I don't know how to do correct mathematical notation on the computer here, so hopefully you'll understand what I'm typing.

So ##d=(a,bc)##. Then ##d = p_1^{r_1} p_2^{r_2} p_3^{r_3} \cdots##, or we can write ##d = 2^{r_2}\, 3^{r_3}\, 5^{r_5} \cdots##. But I don't see how that's going to help me.
 
So you know that gcd(a,b) is the product of the primes that divide both a and b (with the right powers), and similarly for gcd(a,c). And since b and c are coprime, no prime divides both of them. Do you see where this is going?
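If it helps to see that idea concretely, here's a small Python sketch (the helper functions are just for illustration, not standard library): it computes the gcd straight from prime factorizations, so you can watch the primes of bc split between b and c:

```python
from collections import Counter

def factorize(n):
    """Prime factorization of n as a Counter {prime: exponent}, by trial division."""
    factors, d = Counter(), 2
    while d * d <= n:
        while n % d == 0:
            factors[d] += 1
            n //= d
        d += 1
    if n > 1:
        factors[n] += 1
    return factors

def gcd_from_factors(x, y):
    """gcd(x, y) as the product of shared primes, each to the minimum exponent."""
    fx, fy = factorize(x), factorize(y)
    g = 1
    for p in fx.keys() & fy.keys():   # primes dividing both x and y
        g *= p ** min(fx[p], fy[p])
    return g

# With (b, c) = 1, the primes of b*c split cleanly between b and c,
# so gcd(a, b*c) = gcd(a, b) * gcd(a, c). Values here are arbitrary.
a, b, c = 360, 8, 15
print(gcd_from_factors(a, b * c))                       # 120
print(gcd_from_factors(a, b) * gcd_from_factors(a, c))  # 8 * 15 = 120
```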
 
Theorem: If ##(b,c)=1##, then ##(a,bc)=(a,b)(a,c)##.

Let ##g=(a,b)## and ##h=(a,c)##; ##g## and ##h## are integers.
That means ##g \mid a## and ##g \mid b##, and ##h \mid a## and ##h \mid c##.
Therefore:
##b=qg## for some integer ##q##,
##c=th## for some integer ##t##.
Any common divisor of ##g## and ##h## divides both ##b## and ##c##, hence divides ##(b,c)=1##, so ##(g,h)=1##.
Since ##g \mid a##, ##h \mid a##, and ##(g,h)=1##, we get ##gh \mid a##, so ##a=pgh## for some integer ##p##.
##(a,b)=g \Rightarrow (pgh,qg)=g \Rightarrow g(ph,q)=g \Rightarrow (ph,q)=1 \Rightarrow (p,q)=1##
##(a,c)=h \Rightarrow (pgh,th)=h \Rightarrow h(pg,t)=h \Rightarrow (pg,t)=1 \Rightarrow (p,t)=1##
Since ##(p,q)=1## and ##(p,t)=1##, it follows that ##(p,qt)=1##.
##(a,bc)=(pgh,\,qg \cdot th)=(gh \cdot p,\,gh \cdot qt)=gh(p,qt)=gh \cdot 1=gh=(a,b)(a,c)##
Therefore ##(a,bc)=(a,b)(a,c)##.
Q.E.D.

I've never done work like this before, so I'm sure I have some issues with the form, but I showed as much of my thought process as I could in the hope that it's clear. To the best of my knowledge this proves the theorem, but if I took any liberties, please feel free to correct me.
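As an extra sanity check on the result (not a substitute for the proof), here's a small brute-force search using Python's math.gcd; the bound of 30 is arbitrary:

```python
from math import gcd

# Brute-force check of the theorem over a small range: whenever (b, c) = 1,
# (a, bc) should equal (a, b)(a, c). No assertion fires, so no counterexample.
N = 30
for a in range(1, N + 1):
    for b in range(1, N + 1):
        for c in range(1, N + 1):
            if gcd(b, c) == 1:
                assert gcd(a, b * c) == gcd(a, b) * gcd(a, c), (a, b, c)
print(f"No counterexamples for 1 <= a, b, c <= {N}")
```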
 