Homogeneous Polynomials and Algebraically Independent Sets

Summary
The discussion proves that a polynomial ##f## in ##n## variables over a commutative ring ##A## is homogeneous of degree ##d## if and only if ##f(ut_1, ut_2, \ldots, ut_n) = u^d \cdot f(t_1, t_2, \ldots, t_n)## for every set ##\{u, t_1, \ldots, t_n\}## of ##n+1## algebraically independent variables over ##A##. If ##f## is homogeneous, the identity follows by direct substitution. For the converse, algebraic independence is what licenses equating coefficients on the two sides of the identity: it rules out relations among ##u## and the ##t_i## that could make distinct monomials cancel, and it guarantees that no zero divisors interfere when concluding that every total degree must equal ##d##.
shaggymoods
So I am going through Serge Lang's Algebra and he left a proof as an exercise, and I simply can't figure it out... I was wondering if someone could point me in the right direction:

If ##f## is a polynomial in ##n## variables over a commutative ring ##A##, then ##f## is homogeneous of degree ##d## if and only if for every set ##\{u, t_1, t_2, \ldots, t_n\}## of ##n+1## algebraically independent variables over ##A## we have ##f(ut_1, ut_2, \ldots, ut_n) = u^d \cdot f(t_1, t_2, \ldots, t_n)##.

The ##\Leftarrow## implication seems easy enough, although I don't see why we'd need algebraic independence; however, the other direction has me tripped up - I just learned about algebraic independence so I'm rough around the edges. Thanks for any help!
 
Algebraic independence is needed to avoid that the quantities annihilate one another. E.g. is ##f(x,y)=x^2y+x^2y \in \mathbb{Z}_2[x,y]## identically zero or homogeneous of degree ##3\,?##
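
As a quick sanity check of this example (a minimal SymPy sketch, not part of the original argument; the polynomial and the choice of domains are only illustrative):

```python
from sympy import symbols, Poly, GF

x, y = symbols("x y")

# Over GF(2) the two terms cancel, since 2*x**2*y = 0: f is the zero polynomial.
f = Poly(x**2*y + x**2*y, x, y, domain=GF(2))
print(f.is_zero)  # True

# Over the integers the same expression is 2*x**2*y, homogeneous of degree 3.
g = Poly(x**2*y + x**2*y, x, y)
print(g.homogeneous_order())  # 3
```

So whether the expression is "identically zero" or "homogeneous of degree 3" depends on the ring over which it is read.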

A polynomial in ##n## variables looks like ##f(t_1,\ldots,t_n)=\sum_{k_1,\ldots,k_n} a_{k_1\ldots k_n}t_1^{k_1}\cdot \ldots \cdot t_n^{k_n}## so
$$
f(ut_1,\ldots,ut_n) =\sum_{k_1,\ldots,k_n} a_{k_1\ldots k_n}\cdot u^{k_1+\ldots+k_n} \cdot t_1^{k_1}\cdot \ldots \cdot t_n^{k_n}
$$
If ##f(t_1,\ldots,t_n)## is homogeneous of degree ##d##, then ##f(t_1,\ldots,t_n)=\sum_{k_1+\ldots+k_n=d} a_{k_1\ldots k_n}t_1^{k_1}\cdot \ldots \cdot t_n^{k_n}## and ##f(ut_1,\ldots,ut_n) = u^df(t_1,\ldots,t_n)## clearly holds.
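
For a concrete instance of this direction (a small SymPy sketch; the polynomial ##t_1^3 + 5t_1t_2^2 - 2t_2^3## is an arbitrary illustrative choice), substituting ##ut_i## for ##t_i## pulls out exactly ##u^d##:

```python
from sympy import symbols, expand

u, t1, t2 = symbols("u t1 t2")

# A homogeneous polynomial of degree d = 3 in two variables (illustrative choice).
def f(a, b):
    return a**3 + 5*a*b**2 - 2*b**3

# Every monomial has total degree 3, so f(u*t1, u*t2) = u**3 * f(t1, t2)
# and the difference expands to 0.
print(expand(f(u*t1, u*t2) - u**3 * f(t1, t2)))  # 0
```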

For the other direction of the proof, ##f(ut_1,\ldots,ut_n) = u^d f(t_1,\ldots,t_n)## gives
$$
f(t_1,\ldots,t_n)=\sum_{k_1,\ldots,k_n} a_{k_1\ldots k_n}t_1^{k_1}\cdot \ldots \cdot t_n^{k_n} = \sum_{k_1,\ldots,k_n} a_{k_1\ldots k_n}\cdot u^{k_1+\ldots+k_n-d} \cdot t_1^{k_1}\cdot \ldots \cdot t_n^{k_n}
$$
Now we need algebraic independence to conclude that this equation holds on the coefficient level, i.e. that ##a_{k_1\ldots k_n}=a_{k_1\ldots k_n}\cdot u^{k_1+\ldots+k_n-d}## for every index tuple ##(k_1,\ldots,k_n)##. This is analogous to using linear independence for vector space bases. In the next step we write ##0=a_{k_1\ldots k_n} \cdot (1-u^{k_1+\ldots+k_n-d})##. Here we need either an integral domain ##A##, or at least that the ##a_{k_1\ldots k_n}## and ##u## are not zero divisors, because for every tuple with ##a_{k_1\ldots k_n}\neq 0## we want to conclude ##u^{k_1+\ldots+k_n-d}=1##, and since ##u## is transcendental over ##A##, this forces ##k_1+\ldots+k_n=d##.

Algebraic independence is a concise way to guarantee these conditions on zero divisors, because an equation ##0=a_{k_1\ldots k_n} \cdot (1-u^{k_1+\ldots+k_n-d})## with ##a_{k_1\ldots k_n} \neq 0## and ##u^{k_1+\ldots+k_n-d}\neq 1## would itself be an algebraic dependence. Since we need algebraic independence anyway, this is the minimal set of necessary conditions. Authors usually demand an integral domain, which is sufficient but not necessary.
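
To watch the coefficient comparison fail for a non-homogeneous polynomial (again a SymPy sketch, with ##f = t_1^2 + t_2## chosen for illustration): treating ##u, t_1, t_2## as independent symbols, the difference ##f(ut_1, ut_2) - u^2 f(t_1, t_2)## is a nonzero polynomial, because the degree-1 term produces mismatched powers of ##u## that cannot cancel.

```python
from sympy import symbols, Poly

u, t1, t2 = symbols("u t1 t2")

# A non-homogeneous polynomial: total degrees 2 and 1 are mixed (illustrative).
f = t1**2 + t2

# Substitute t_i -> u*t_i and subtract u**2 * f.  The t1**2 terms cancel,
# but the t2 term leaves u*t2 - u**2*t2, i.e. a coefficient (1 - u) that a
# transcendental u can never make vanish.
diff = Poly(f.subs({t1: u*t1, t2: u*t2}) - u**2 * f, u, t1, t2)
print(diff.is_zero)    # False
print(diff.as_expr())  # -u**2*t2 + u*t2
```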
 