You might recall that one example of vectors is "arrows on the chalkboard." We can talk about adding or subtracting these arrows and specifying relationships between them. But it turns out that dealing with the arrows directly (via, say, the parallelogram rule) is very cumbersome. So we are led to introduce a basis that lets us represent our arrows by lists of numbers like (1,5) or (-7,10). Solving for relations between the arrows is then reduced to the equivalent, easier problem of solving algebraic relations between the basis coefficients.
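For instance, in coordinates the parallelogram rule collapses to componentwise addition:

    (1, 5) + (-7, 10) = (-6, 15).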
We can generalize the notion of a basis-vector expansion to N dimensions, where we represent vectors as N-tuples of numbers, and from there even to countably infinite dimensions, where we represent our vectors by infinite sequences of numbers. Now imagine that instead of a countable set of basis elements indexed by some integer n, we have vectors (functions, in this case) that require an uncountable family of basis elements (here, basis functions e^{-st}) indexed by a continuous parameter, say s.
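To make the generalization concrete: in the countable case the expansion looks like

    v = \sum_n c_n e_n,

with one coefficient c_n per basis vector e_n. The continuous case simply trades the sum over the index n for an integral over the index s.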
A Laplace transform (like a Fourier transform) is exactly such a continuous basis-vector expansion. We represent a function as a superposition of basis functions, "adding" them together by integrating over s rather than summing over n. The expansion coefficients are given by the Laplace transform F(s) of the original function f(t): for each value of s, you get a number, namely the expansion coefficient for that value of s.
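In the usual notation, with f(t) the original function and F(s) its transform, the pair reads

    F(s) = \int_0^\infty f(t) e^{-st} \, dt                    (extract the coefficient for each s)

    f(t) = \frac{1}{2\pi i} \int_{c - i\infty}^{c + i\infty} F(s) e^{st} \, ds    (superpose the basis functions, weighted by F(s))

(Strictly, in the inversion integral s runs along a vertical line in the complex plane rather than over the real numbers, but the picture of a continuously indexed basis is the same.)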
Just as introducing a basis allowed us to manipulate arrows on the board more easily, that is, algebraically in terms of the expansion coefficients, the same is true here. The expansion coefficients, i.e., the Laplace transform, can be manipulated algebraically instead of dealing with the messy relationships between the original functions.
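The classic example of this algebraic payoff is that differentiation in t becomes multiplication by s (up to a boundary term):

    \mathcal{L}\{f'\}(s) = s F(s) - f(0),

so a differential equation in f(t) turns into an ordinary algebraic equation in F(s).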
Of course, when we are done, we must convert from the basis representation back to "arrows" or "functions" to see the final result. That is what the inverse Laplace transform does.
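As a quick sanity check of the whole round trip, here is a minimal sketch using SymPy. The toy problem f' + f = 0 with f(0) = 1 is my own choice of example, not one from the discussion above:

    import sympy as sp

    t = sp.symbols('t', positive=True)
    s, F = sp.symbols('s F')   # F stands for the unknown transform F(s)

    # Transform the ODE f' + f = 0, f(0) = 1, using L{f'} = s*F(s) - f(0).
    # The differential equation becomes a purely algebraic equation in F:
    F_sol = sp.solve(sp.Eq(s*F - 1 + F, 0), F)[0]      # -> 1/(s + 1)

    # Convert back from expansion coefficients to a function of t:
    f_sol = sp.inverse_laplace_transform(F_sol, s, t)
    # -> exp(-t) (SymPy may include a Heaviside(t) factor, which is 1 for t > 0)

    print(F_sol, f_sol)

The middle step is exactly the "algebra on coefficients" described above, and the last step is the inverse transform converting the coefficients back into a function.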