Linear Functions: Showing Properties for Multi-Vars

  • Thread starter: WittyName
  • Tags: Functions, Linear

WittyName
Linear functions are functions that satisfy the two properties:
$$f(x+y) = f(x) + f(y), \qquad f(a x) = a\,f(x).$$

I was wondering how you would show these properties hold for a multi-variable function, e.g. ##f(x,y,z)##. Would it suffice to show $$f(x_1 + x_2, y, z) = f(x_1, y, z) + f(x_2, y, z), \qquad f(a x, y, z) = a\,f(x, y, z)?$$ That is, fix all the other variables, show the properties hold in one variable, then repeat for each remaining variable in turn.

Or would you have to consider something like $$f(x_1 + x_2,\; y_1 + y_2,\; z_1 + z_2) \quad \text{and} \quad f(a x,\, a y,\, a z)?$$
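For what it's worth, the two conditions are genuinely different: checking linearity one variable at a time gives *multilinearity*, which is strictly weaker than linearity. A standard counterexample (my own illustration, not from the thread) is ##f(x,y) = xy##:

$$f(x_1 + x_2,\, y) = (x_1 + x_2)\,y = f(x_1, y) + f(x_2, y), \qquad f(a x, y) = a\,f(x, y),$$

so ##f## is linear in each variable separately, yet

$$f(x_1 + x_2,\; y_1 + y_2) = (x_1 + x_2)(y_1 + y_2) \ne x_1 y_1 + x_2 y_2 = f(x_1, y_1) + f(x_2, y_2),$$

so ##f## is not linear as a map ##\mathbb{R}^2 \to \mathbb{R}##. Linearity requires the vector form: ##f(\mathbf{u} + \mathbf{v}) = f(\mathbf{u}) + f(\mathbf{v})## and ##f(a\mathbf{u}) = a\,f(\mathbf{u})## for whole vectors ##\mathbf{u}, \mathbf{v}##.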
 
Hey WittyName and welcome to the forums.

Basically, for a multi-variable function, linearity means your function is a matrix applied to a vector: if you can show that a matrix exists that defines your function, then it's linear.

What you do is treat your input as a vector ##\mathbf{v}## (typically a column vector) and then show that a matrix ##A## exists with ##f(\mathbf{v}) = A\mathbf{v}##.

Linearity then works because of the nature of matrix multiplication: multiplying a matrix by a scalar satisfies ##A(a\mathbf{v}) = a(A\mathbf{v})##, and matrix multiplication distributes over addition, ##A(\mathbf{u} + \mathbf{v}) = A\mathbf{u} + A\mathbf{v}##, provided the dimensions match.
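This criterion can be sketched numerically: pick a candidate function, exhibit the matrix that should define it, and spot-check both linearity properties on random vectors. The particular function and matrix below are my own illustrative choices, not from the thread (and a numerical check is evidence, not a proof):

```python
import numpy as np

# A candidate multi-variable function f : R^3 -> R (illustrative choice).
def f(v):
    x, y, z = v
    return 2 * x + 3 * y - z

# The matrix (here a 1x3 row, stored as a vector) that defines f if it is
# linear: f(v) = A @ v with A = [2, 3, -1].
A = np.array([2.0, 3.0, -1.0])

rng = np.random.default_rng(0)
u = rng.standard_normal(3)
v = rng.standard_normal(3)
a = rng.standard_normal()

# Additivity and homogeneity in the *whole vector* argument:
assert np.isclose(f(u + v), f(u) + f(v))
assert np.isclose(f(a * u), a * f(u))
# The matrix reproduces the function:
assert np.isclose(f(u), A @ u)
print("linearity checks passed")
```

Note the checks use the full vector argument ##\mathbf{u} + \mathbf{v}##, not one coordinate at a time.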
 
chiro said:
> Basically for multi-variable functions, your function is a matrix applied to a vector and if you can show that such a matrix exists that defines your function, then it's essentially linear.

Thanks for the reply.

Would you know of any sites where I can read more on this?
 