
Function Notation

  1. Apr 6, 2009 #1
    I have a simple problem here, some confusion on my part.

    I've just started learning about functions and their inverses. We've used the traditional f^(-1) notation to denote the inverse. But recently, after learning some real-life applications, we learned another form. Say we have h(t), giving hours in terms of temperature. The inverse is then t(h), giving temperature in terms of hours. My confusion is that h is the name of the function in h(t) but becomes the input in t(h), and vice versa for t.

    How can h serve both as the name of a function and a numerical variable at the same time?
  3. Apr 6, 2009 #2



    Clever question!

    With a nice bit of abuse of notation! :smile:

    Let us be a bit "anal" about this, and look upon it the following way:

    Consider two sets of numbers, [itex]\mathcal{H},\mathcal{T}[/itex]
    (for concreteness, regard both as some interval)

    For simplicity, we let an arbitrary member of [itex]\mathcal{H}[/itex] be called "h", whereas an arbitrary member in [itex]\mathcal{T}[/itex] be called "t".

    Now, suppose you have a bijective mapping from [itex]\mathcal{H}[/itex] to [itex]\mathcal{T}[/itex] that we choose to call T(h).

    Since T(h) is bijective, there exists a bijective mapping from [itex]\mathcal{T}[/itex] to [itex]\mathcal{H}[/itex], H(t), that can be regarded as the INVERSE of T(h), i.e., we will have, for every h in [itex]\mathcal{H}[/itex], H(T(h))=h, and for every t in [itex]\mathcal{T}[/itex], T(H(t))=t.

    Instead of carefully distinguishing between the labels of the sets, the mappings, and the elements of the sets, we simplify the notation by using h and t both as element designations and as names of mappings, taking due care of the context in which they appear to avoid misunderstandings. :smile:
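    The inverse pair above can be sketched in a few lines of Python. The linear relation between hours and temperature here is made up purely for illustration; any bijection would do:

    ```python
    # A made-up bijective relation between hours h and temperatures t: t = 2*h + 10.
    def T(h):
        """Map an hour h to a temperature t."""
        return 2 * h + 10

    def H(t):
        """Inverse map: send a temperature t back to its hour h."""
        return (t - 10) / 2

    # The defining identities of an inverse pair:
    # H(T(h)) == h for every h, and T(H(t)) == t for every t.
    for h in [0, 1.5, 7]:
        assert H(T(h)) == h
    for t in [10, 13, 24]:
        assert T(H(t)) == t
    ```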
  4. Apr 6, 2009 #3
    Just to confirm my understanding.

    Basically, the h in h(t) represents a set, while the h in t(h) represents a member of that set. For simplicity, we just give them the same name.

    Thanks for clearing that up.
  5. Apr 7, 2009 #4
    This might work in physics and engineering, but it's not a very good way to think about it in math.

    When you say something like h(t) = t^2 - 1, you're defining a function called h. After you have stated that, h is a mathematical object, just like 1, pi, or the empty set. The function is NOT h(t), even though it is often written that way. The notation h(t) means that h is evaluated with the input t. You can see the difference clearly by looking at the types involved.

    h: R -> R (h is a function from real numbers to real numbers)
    t: R (t is a real number)
    h(t): R (h(t) is a real number)

    The name of the parameter is meaningless after the definition. You just refer to the function as h. And in the definition, you could have used any name for the parameter. If you let h(t) = t^2 - 1, it's equivalent to h(x) = x^2 - 1.

    There isn't a standard notation for it in mathematics, but computer science has a handy notation for creating function "literals". What I mean is that when you define a function using h(t) = t^2 - 1, you haven't really isolated h on the left-hand side: you are defining h as applied to an argument t. So how would you isolate h on the left-hand side of the equation? Sometimes we use notation that looks like this:

    h = t -> t^2 - 1

    The "t ->" part indicates we are creating a function. This is similar to how we use { ... } to create a set. The "t" on the left hand side gives the name of the argument, which exists within the body of the function. This "t" variable now works very similarly to how the i, j, or k work in summation notation or dx, dy, dz work in an integral. They are dummy variables and you can change them at a whim as long as you change them everywhere inside.
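    This "t ->" notation exists almost verbatim in Python as a lambda expression, which makes the dummy-variable point concrete:

    ```python
    # The function literal "t -> t^2 - 1" written as a Python lambda:
    h = lambda t: t**2 - 1

    # The parameter name is a dummy variable; renaming it everywhere
    # inside the body yields the very same function:
    g = lambda x: x**2 - 1

    assert h(3) == g(3) == 8
    ```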

    Anyway, that's my little notation rant. I think it's nice and neat.

    The f^-1 notation isn't the greatest, but it works very well when you think of invertible functions as a group under composition. We notate function composition as if it were multiplication, so fg = f º g, and (fg)(x) = f(g(x)). When we do this, the f^-1 notation makes perfect sense, because f f^-1 = 1 (where 1 is the identity function, the identity element of the group).
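    The group-under-composition picture can be sketched directly; the particular invertible function below is just an illustrative assumption:

    ```python
    def compose(f, g):
        """Return the composition f º g, i.e. x -> f(g(x))."""
        return lambda x: f(g(x))

    identity = lambda x: x        # the group's identity element

    f = lambda x: 2 * x + 1       # an invertible function
    f_inv = lambda y: (y - 1) / 2 # its inverse

    # f composed with f_inv behaves like the identity, i.e. f f^-1 = 1:
    assert compose(f, f_inv)(5) == identity(5) == 5
    assert compose(f_inv, f)(5) == identity(5) == 5
    ```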
  6. Apr 7, 2009 #5
    Also, it's not right to think of h as a "set". It's a function. It's a different kind of creature entirely.

    Set theoretically, you can model functions in terms of sets.... a function is a set of pairs (x, y) where the x's are unique in the set, blah blah blah. But a function ISN'T just a set. For one thing, you can just as easily model sets in terms of functions. But these kinds of constructions are just to show that our intuitive notion of the concept of a function is well defined and valid in a formal setting.
  7. Apr 7, 2009 #6



    It doesn't. h is a function, h(t) is a number. "h" refers to the function, "h(t)" refers to the specific value h has at a specific value of t.

    You can think of a function as a set of ordered pairs: h = {(t, h(t)) : t in the domain}. That is, every member of h is a pair (t, h(t)), where t is a value of the independent variable and h(t) is the value h assigns to that t.
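    On a small finite domain, this set-of-pairs picture can be modeled with a Python dict (the domain range(4) is just an arbitrary choice for illustration):

    ```python
    # Model h as its set of ordered pairs (t, h(t)) on a finite domain,
    # stored as a dict keyed by the first entry of each pair:
    h = {t: t**2 - 1 for t in range(4)}   # pairs (0,-1), (1,0), (2,3), (3,8)

    # "Evaluating" h at t is just looking up the pair whose first entry is t:
    assert h[2] == 3

    # The uniqueness requirement (no t paired with two different values)
    # is exactly what dict keys enforce automatically.
    ```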