A function is a triple ##(A,B,\Gamma)## such that
- A is a set called the domain,
- B is a set called the co-domain,
- ##\Gamma \subset A \times B## is a set called the graph,
- for all ##x \in A## there exists ##y \in B## such that ##(x, y) \in \Gamma## ("every function is defined on its whole domain"),
- if ##(x, y_1) \in \Gamma## and ##(x, y_2) \in \Gamma## then ##y_1 = y_2## ("the vertical line test": each point of the domain is paired with at most one value).
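The triple definition above can be checked mechanically. Here is a minimal sketch in Python (the name `is_function` and the example sets are my own, for illustration): it models ##\Gamma## as a set of pairs and tests the subset, totality, and single-valuedness conditions.

```python
def is_function(A, B, Gamma):
    """Return True iff the triple (A, B, Gamma) satisfies the definition."""
    # The graph must be a subset of A x B.
    if not all(x in A and y in B for (x, y) in Gamma):
        return False
    # Totality: every x in A appears as a first coordinate of some pair.
    if not all(any(u == x for (u, _) in Gamma) for x in A):
        return False
    # Vertical line test: no x is paired with two different values,
    # i.e. the first coordinates of Gamma contain no repeats.
    firsts = [u for (u, _) in Gamma]
    return len(firsts) == len(set(firsts))

# The squaring function restricted to {0, 1, 2}:
print(is_function({0, 1, 2}, {0, 1, 4, 9}, {(0, 0), (1, 1), (2, 4)}))  # True

# Fails the vertical line test: 1 is paired with both 1 and 4.
print(is_function({0, 1, 2}, {0, 1, 4, 9}, {(0, 0), (1, 1), (1, 4), (2, 4)}))  # False
```

Note that the co-domain ##B## genuinely matters in the definition: the same graph with a different ##B## is, strictly speaking, a different function.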
Notations such as ##f(x)## are attempts at defining functions without writing the whole thing down. They are just shortcuts. Different books will have different conventions.
Now as to the question concerning ##f## vs ##f(x)##. When I was much younger I came across an abstract algebra textbook focusing on ring theory, dating back to the early 1930s. It used the ##xf## notation: ##x## is a point in the domain and ##f## is the map (homomorphism). The clear advantage is composition: ##xfg## means apply ##f## first, then ##g##. Today we have to write ##g\circ f## for the same thing. This is great for algebra, horrible for applied analysis. What is ##2f##? Twice the function? Or the function evaluated at 2? To top it off, as matrices became more standard, the ##xf## notation was abandoned for ##f(x)##, but the split between "##f##" and "##x##" stayed in pure mathematics circles. So as a consequence pure mathematicians consider the function to be ##f## and the value to be ##f(x)##.
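The order-of-application point can be made concrete. In the sketch below (the maps `f` and `g` are my own toy examples), the postfix reading ##xfg## and the modern ##(g\circ f)(x)## denote the same function; the notations just read in opposite directions.

```python
def f(x):
    return x + 1   # illustrative map f: add one

def g(x):
    return 2 * x   # illustrative map g: double

# Postfix "xfg" reads left to right: apply f first, then g.
# In modern notation the same function is written g o f, read right to left.
def g_after_f(x):
    return g(f(x))

print(g_after_f(3))  # g(f(3)) = g(4) = 8
print(f(g(3)))       # the other order: f(g(3)) = f(6) = 7
```

The two printed values differ, which is exactly why the direction of reading built into the notation matters.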
But I can't find any old applied mathematics papers that don't use "f(x)" as the function. For an applied mathematician, what a function does at each point is of prime importance. It's my belief that applied mathematics was using "f(x)" as the function before pure mathematics settled on "f" as the function.