Why does the author use the notation ##f_c(d)## instead of ##f(c,d)##?

member 587159
Hello everyone. I have read a proof but I have a question concerning the notation. To give some context, I will write down this proof as written in the book.

Theorem: There is a unique binary operation ##+: \mathbb{N} \times \mathbb{N} \rightarrow \mathbb{N}## that satisfies the following two properties for all ##n,m \in \mathbb{N}##
1) ##n + 1 = s(n)##
2) ##n + s(m) = s(n + m)##

(s is the successor function as described in the Peano Postulates)
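(A small computation of my own, not from the book, to make the properties concrete: these two rules already determine every sum, e.g. ##2 + 2 = 2 + s(1) = s(2 + 1) = s(s(2)) = s(3) = 4##.)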

Proof: Uniqueness: I'm going to skip this here as it is not important for my question.

Existence:

For ##p \in \mathbb{N}##, we can apply the recursion theorem to the set ##\mathbb{N}##, the element ##s(p) \in \mathbb{N}## and the function ##s: \mathbb{N} \rightarrow \mathbb{N}## to deduce that there is a unique function ##f_p: \mathbb{N} \rightarrow \mathbb{N}## such that ##f_p(1) = s(p)## and ##f_p \circ s = s \circ f_p##. Let ##+: \mathbb{N} \times \mathbb{N} \rightarrow \mathbb{N}## be defined by ##c + d = f_c(d)## for all ##(c,d) \in \mathbb{N} \times \mathbb{N}##. Let ##n,m \in \mathbb{N}##. Then ##n + 1 = f_n(1) = s(n)##, which is part 1) and ##n + s(m) = f_n(s(m)) = s(f_n(m)) = s(n + m)##, which is part 2).
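To see the construction in action, here is a minimal sketch of my own (not from the book), modelling ##\mathbb{N}## as Python's positive integers; the names `s`, `f` and `add` are my own:

```python
# Illustrative sketch of the recursion-theorem construction of +,
# modelling the naturals as Python integers starting at 1.

def s(n):
    """The successor function from the Peano postulates."""
    return n + 1  # built-in arithmetic stands in for the abstract successor

def f(p):
    """Return the unique f_p with f_p(1) = s(p) and f_p(s(m)) = s(f_p(m))."""
    def f_p(d):
        if d == 1:
            return s(p)        # base case:  f_p(1) = s(p)
        return s(f_p(d - 1))   # recursion:  f_p(s(m)) = s(f_p(m))
    return f_p

def add(c, d):
    """c + d is defined as f_c(d)."""
    return f(c)(d)

assert add(2, 3) == 5  # 2 + 3 = f_2(3) = s(s(s(2)))
```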

Now, here comes this silly question. Why does the author use the notation ##f_c(d)##? It seems that he's 'hiding' that ##f_c## depends on two variables, ##c## and ##d##, instead of one. Although I do understand the proof, I feel uncomfortable with this notation.

Thanks in advance
 
He wants to consider c as an identifying parameter while d is the variable, in order to be consistent with the earlier definition of ##f_p## ... ##f_c## satisfies the same definition with ##p=c##.

It would be valid to use ##f(p,d)## instead, with correspondingly modified notation in the definition.

Consider the analogous situation:
##f(x)=\sum_{n=0}^N a_ng_n(x)## vs ##f(x)=\sum_{n=0}^N a(n)g(n,x)## ...
 
Simon Bridge said:
He wants to consider c as an identifying parameter while d is the variable, in order to be consistent with the earlier definition of ##f_p## ... ##f_c## satisfies the same definition with ##p=c##.

It would be valid to use ##f(p,d)## instead, with correspondingly modified notation in the definition.

Consider the analogous situation:
##f(x)=\sum_{n=0}^N a_ng_n(x)## vs ##f(x)=\sum_{n=0}^N a(n)g(n,x)## ...

Thanks. But if I wrote ##f(p,d)## instead, this would indicate that ##f## has ##\mathbb{N} \times \mathbb{N}## as its domain, whereas the domain is in fact ##\mathbb{N}##, wouldn't it?
 
Well, by that argument, ##g_n(x)##, in the analogy, has domain ##\mathbb N \times \mathbb R## right?
Are you unhappy with the subscript notation there too?

Consider the set of polynomials ... if y is a polynomial of degree n in x, then we can write ##y = p_n(x)##, right?
But ##p_n## still maps one dimension onto one dimension, even though I need another number to specify the degree.

An example in physics would be the single atomic state wavefunction, which would be: ##\Psi_{nlms}(x,y,z,t)## ... so now we have 8 variables, four of them are subscripts specifying the state and four are arguments. What do we gain from writing ##\Psi(n,l,m,s,x,y,z,t)##?

An advantage of using the subscript notation over including it as an argument of the function is that you can talk about ##g_n## (etc.) as a particular function, and discuss its properties, without referring to the argument explicitly. This is, in fact, what the author does.

Perhaps it would help to think of ##f_p(s)## as holding the value of p constant and varying s - but, at the same time, recognising that p may take more than that one value in general. This is an implication that the notation ##f(p,s)## does not provide. Note: if p is a parameter rather than an argument, then the domain is still 1D.

The author's usage is proper, reasonable, consistent, and suited to the purpose of the proof.
What is the problem?
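In programming terms, the subscript notation is exactly currying: fixing the parameter first yields an ordinary one-argument function. A minimal sketch of my own (the names `g` and `g_sub` are invented for illustration):

```python
# Subscript vs. argument notation, seen as currying.

def g(n, x):
    """g written with a two-dimensional domain: g(n, x)."""
    return n * x

def g_sub(n):
    """Return the family member g_n; each g_n has a one-dimensional domain."""
    def g_n(x):
        return n * x
    return g_n

g3 = g_sub(3)            # g_3 is a particular function we can name and study
assert g3(5) == g(3, 5)  # g_n(x) and g(n, x) agree pointwise
```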
 
Simon Bridge said:
Well, by that argument, ##g_n(x)##, in the analogy, has domain ##\mathbb N \times \mathbb R## right?
Are you unhappy with the subscript notation there too?

Consider the set of polynomials ... if y is a polynomial of degree n in x, then we can write ##y = p_n(x)##, right?
But ##p_n## still maps one dimension onto one dimension, even though I need another number to specify the degree.

An example in physics would be the single atomic state wavefunction, which would be: ##\Psi_{nlms}(x,y,z,t)## ... so now we have 8 variables, four of them are subscripts specifying the state and four are arguments. What do we gain from writing ##\Psi(n,l,m,s,x,y,z,t)##?

An advantage of using the subscript notation over including it as an argument of the function is that you can talk about ##g_n## (etc.) as a particular function, and discuss its properties, without referring to the argument explicitly. This is, in fact, what the author does.

Perhaps it would help to think of ##f_p(s)## as holding the value of p constant and varying s - but, at the same time, recognising that p may take more than that one value in general. This is an implication that the notation ##f(p,s)## does not provide. Note: if p is a parameter rather than an argument, then the domain is still 1D.

The author's usage is proper, reasonable, consistent, and suited to the purpose of the proof.
What is the problem?

I have not covered the part with the sums yet, but your explanation here helped a lot and now I understand it. I just finished high school and I am not familiar with such notations, so most likely that's where the confusion started. Once I start at university, I will get used to it. There is no problem anymore. Thank you for helping me out.
 