What's the point of using logarithms when sketching the inverse of a function?

maki1995
What's the point of logarithms when trying to sketch functions? Isn't y = 3^x the same as x = 3^y? I think it should be the same, but I get different results with each method. If it is the same, what's the point of y = log_3(x)? It's confusing, and I don't get the same results when trying to find points on the graph. Logarithms make sense when you're solving an equation with different bases, but what's the point of them when sketching graphs?
For example, say you have y = 3^x. The inverse function would be y = log_3(x), or x = 3^y. I'm trying to find points for the graph of the inverse function. Say y = 2. Then 2 = log_3(x), which means x would be 4.19. Now, from the other equation, if x = 3^2, then x would be 9, which doesn't make sense.
 
maki1995 said:
Isn't y=3^x the same as x=3^y?

No it's not. The inverse function of an exponential function is a log function. That's the "point" of logs.

Saying y = 3^x is the same as x = 3^y is just as wrong as saying y = x^3 is the same as x = y^3.
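
One quick way to see that they differ (a numerical sketch in Python; the sample point is arbitrary): the pair (x, y) = (2, 9) satisfies y = 3^x but not x = 3^y.

```python
x, y = 2, 9
print(y == 3 ** x)   # True:  9 equals 3**2
print(x == 3 ** y)   # False: 2 does not equal 3**9 = 19683
```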
 
When you plot a function y vs. x, there is a distinction between y and x: x is typically the "independent variable" and y is the "dependent variable". They are called this because we pick x to be whatever we want, but some relation y = f(x) determines y - hence y depends on x, while x is independent of y. Now, the labels are arbitrary - you could take y to be the independent variable and x to be the dependent variable, as you will see in a moment.

When you write "y = 3^x", x is independent and y is dependent. So, we can pick x and find out y. But suppose we wanted to pick y and figure out what x needs to be to produce that particular y. That means we want to find the function x = g(y) such that 3^x = 3^(g(y)) = y, where y is now the independent variable and x is the dependent variable. However, the labels are arbitrary, and people usually like x to be independent and y to be dependent, so we switch the labels x and y to get 3^y = 3^(g(x)) = x.

The result is that when you write y = 3^x, x is independent and y is dependent. When you are talking about finding the inverse relation to the function and you write x = 3^y, x is still independent and y is dependent, so in the two cases y actually represents two different functions. In the first case, y represents the function y = f(x) = 3^x. In the second case, y represents the function y = g(x) = log_3(x).
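
To make that concrete, here is a minimal numerical sketch in Python (the names f and g are just for illustration) checking that f(x) = 3^x and g(x) = log_3(x) undo each other, and showing which point on the inverse graph has y = 2:

```python
import math

def f(x):
    # the original function: y = 3**x
    return 3 ** x

def g(x):
    # the inverse function: y = log base 3 of x
    return math.log(x, 3)

print(g(f(2)))   # 2.0 -- g undoes f
print(f(g(9)))   # 9.0 (up to floating-point rounding) -- f undoes g

# The point on the inverse graph with y = 2: solve log_3(x) = 2, so x = 3**2 = 9.
# That point is (9, 2) -- the same pair as (2, 9) on y = 3**x, with the
# coordinates swapped.
print(3 ** 2)    # 9
```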

Does that make things clearer, or should I try to explain it again?
 
The point of inverse functions is that if you have y as some function of x (i.e., y = f(x)), the inverse allows you to write x as a different function of y.

For example, if y = 10^x and you're given a value of x, you can compute the associated value of y fairly easily. However, if you're given a value of y, it's not so easy to find the associated x value, unless you know that the inverse of the 10^x function is the log_10 function.

More precisely, the equation y = 10^x is equivalent to x = log_10(y). These two equations have exactly the same graph, meaning that any ordered pair (x, y) that satisfies one equation also satisfies the other.
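
As a quick sanity check (a Python sketch; the sample points are arbitrary), any pair (x, y) produced by y = 10^x also satisfies x = log_10(y):

```python
import math

for x in [-1.0, 0.0, 0.5, 2.0]:
    y = 10 ** x                  # (x, y) lies on the graph of y = 10**x
    print(x, y, math.log10(y))   # the last column reproduces x, so the same
                                 # pair also satisfies x = log10(y)
```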
 
Another explanation, similar to what Mark44 said: if you have a set of two-dimensional data points that may fit y = 10^x, and you know this is a function, then if its inverse is also a function (which, in this form, it certainly is), you can treat x as a function of y and write x = log_10(y). You might choose this if you take y as the independent variable and x as the dependent variable. In fact, you might choose variable names other than "x" and "y" - maybe pairs (s, t), or (q, r), or (n, p), whatever.
 