You may always build the vectors from the origin ##(0,0)## to the fixed points ##(x_0,f(x_0))## and ##(x_0,g(x_0))## and ask whether these two specific vectors are linearly independent, or for which ##x_0## they are and for which they are not. However, this is not linear dependence of the functions as a whole. The only place where "for all ##x##" comes in is when we define equality of functions: ##f=g ⇔ f(x) = g(x) \; ∀ x##.
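A quick sketch of this pointwise view. The choices ##f(x)=x## and ##g(x)=x^2## are illustrative, not from your question: the two functions are linearly independent as functions, yet the vectors at a single ##x_0## can still be dependent.

```python
# Illustrative (hypothetical) choices: f(x) = x, g(x) = x^2.
def f(x):
    return x

def g(x):
    return x * x

def pointwise_independent(x0):
    # The vectors (x0, f(x0)) and (x0, g(x0)) in R^2 are linearly
    # independent iff the determinant
    # | x0  f(x0) |
    # | x0  g(x0) |
    # is nonzero.
    det = x0 * g(x0) - x0 * f(x0)
    return det != 0

print(pointwise_independent(2.0))  # True:  (2,2) and (2,4) are independent
print(pointwise_independent(1.0))  # False: (1,1) and (1,1) coincide
print(pointwise_independent(0.0))  # False: both are the zero vector
```

So "dependent at ##x_0##" really does vary with ##x_0##, which is exactly why it cannot be the definition of dependence of ##f## and ##g## themselves.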
And "dependent" can also be used in another sense: e.g., one sometimes says that ##y=f(x)## is the dependent variable, or that ##f## depends on ##x##.
Linearity, however, always has the form ##(af + bg)(x) = af(x) + bg(x)##, and yes, for all ##x## simultaneously.
You may of course vary where ##a,b## come from, e.g. ##\mathbb{R}## or ##\mathbb{R}[x]##, or even where ##x## is taken from, e.g. ##\mathbb{R}## or ##[0,1]##. But linearity of functions refers to the addition ##f+g## and the "stretching" ##af##. "Stretching" is called scalar multiplication, and ##a## is the scalar. (You scale the function values.)
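These two operations can be written out directly; a minimal sketch, with `scale` and `add` as hypothetical helper names:

```python
import math

def scale(a, f):
    # Scalar multiplication: (a*f)(x) = a * f(x) -- the values are scaled.
    return lambda x: a * f(x)

def add(f, g):
    # Pointwise addition: (f+g)(x) = f(x) + g(x).
    return lambda x: f(x) + g(x)

# h = 2*sin + 3*cos, built purely from the two vector-space operations.
h = add(scale(2.0, math.sin), scale(3.0, math.cos))

for x in (0.0, 1.0, -0.5):
    assert h(x) == 2.0 * math.sin(x) + 3.0 * math.cos(x)
```

Note that the equality in the assertion holds at every sampled ##x##: the combination is defined pointwise, simultaneously for all arguments.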
Please don't confuse this with linear functions ##φ##. These are a special type of function for which linearity holds in the argument, the variable: ##φ(ax_1 + bx_2) = aφ(x_1) + bφ(x_2)##.
But the cosine function in your example is not linear: in general ##\cos(ax_1 + bx_2) ≠ a\cos(x_1) + b\cos(x_2)##.
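You can see the difference numerically. A sketch, checking the linearity condition at one sample point (the map ##φ(x)=3x## and the chosen coefficients are illustrative assumptions):

```python
import math

def is_linear_at(phi, a, b, x1, x2, tol=1e-12):
    # Check phi(a*x1 + b*x2) == a*phi(x1) + b*phi(x2) at one sample point.
    return abs(phi(a * x1 + b * x2) - (a * phi(x1) + b * phi(x2))) < tol

triple = lambda x: 3.0 * x  # a genuinely linear map: phi(x) = 3x

print(is_linear_at(triple, 2.0, -1.0, 1.0, 0.5))    # True
print(is_linear_at(math.cos, 2.0, -1.0, 1.0, 0.5))  # False
```

A single failing sample point is enough to rule out linearity, although a passing check at one point is of course not a proof of it.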