It seems to me I just did this on a different forum! Did you ask there also?
Strictly speaking, it is not necessarily true- you have to assume nice properties of the coefficients, which may be functions of x. In particular, we need the coefficients to be continuous and the leading coefficient not equal to 0. You also need to have the equation "homogeneous"- that is, that there are no functions of the independent variable, x, that are not multiplying the dependent variable, y, or derivatives of it.
Assuming that, a first order linear equation can be written dy/dx= f(x, y), and as long as f is continuous in x and "Lipschitz" in y, the "existence and uniqueness" theorem applies ("Lipschitz" is midway between "continuous" and "differentiable", so many texts take "differentiable" as a sufficient condition). Requiring the coefficients to be continuous and the leading coefficient not equal to 0 guarantees that.
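Spelling that last step out: with p(x)y'+ q(x)y= 0 and p(x) never 0, we get
\frac{dy}{dx}= -\frac{q(x)}{p(x)}y= f(x, y), \qquad |f(x, y_1)- f(x, y_2)|= \left|\frac{q(x)}{p(x)}\right||y_1- y_2|
so on any closed interval where p and q are continuous and p is not 0, the factor |q(x)/p(x)| is bounded and f is Lipschitz in y.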
Now, it is easy to show that "the set of all solutions to a linear, homogeneous differential equation forms a vector space". To do that, assume f and g are solutions, put y(x)= af(x)+ bg(x), where a and b are numbers, into the equation, and then show, using (af+ bg)'= af'+ bg', etc., that you can separate everything into a(... terms in f...)+ b(... terms in g...)= a(0)+ b(0)= 0.
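Written out for the first order case, with y= af+ bg:
p(af+ bg)'+ q(af+ bg)= a(pf'+ qf)+ b(pg'+ qg)= a(0)+ b(0)= 0.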
Now, as long as the coefficients are as I have said, we can use the "existence and uniqueness" theorem to show that there exists a unique solution, y1, to the equation satisfying y1(a)= 1 for any specific a. Now, let y be any solution to the first order equation, and let Y= y(a). It is easy to see that the function Yy1(x) satisfies the same differential equation: p(Yy1)'+ q(Yy1)= Y(py1'+ qy1)= Y(0)= 0. Since Yy1(a)= Y= y(a), both functions also satisfy the same initial condition, so the "uniqueness" theorem requires that y(x)= Yy1(x).
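A concrete example, just to see the mechanism: for y'+ y= 0, the solution with y1(a)= 1 is y1(x)= e^{-(x- a)}, and any solution satisfies y(x)= y(a)e^{-(x- a)}= Yy1(x).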
The same idea works for second order differential equations, with some variations. Given the second order linear, homogeneous d.e. py''+ qy'+ ry= 0, with p, q, and r continuous functions and p never 0, we can let Y1= y', Y2= y and write this as a pair of first order equations. Since Y1= y', y''= Y1', so we have pY1'+ qY1+ rY2= 0, that is, Y1'= -(q/p)Y1- (r/p)Y2, together with Y2'= Y1. We can then write that as a "vector" equation, taking our vector as Y= (Y1, Y2), so that our two equations become the single vector equation \frac{dY}{dx}= \frac{d}{dx}\begin{pmatrix}Y_1 \\ Y_2\end{pmatrix}= \begin{pmatrix}-q(x)/p(x) & -r(x)/p(x) \\ 1 & 0\end{pmatrix}\begin{pmatrix}Y_1 \\ Y_2\end{pmatrix}
One can, without too much trouble, cast the "existence and uniqueness" theorem for first order problems into "vector" form- the proof follows the same logic. The main difference is that the "additional condition" you need for uniqueness becomes
Y(a)= \begin{pmatrix}Y_1(a) \\ Y_2(a)\end{pmatrix}= \begin{pmatrix}y'(a) \\ y(a)\end{pmatrix}= \begin{pmatrix}B \\ A\end{pmatrix}
the standard "initial value" condition for such a problem.
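For example (the particular equation is just my own illustration), y''+ xy'+ y= 0, with p= 1, q= x, r= 1, becomes
\frac{d}{dx}\begin{pmatrix}Y_1 \\ Y_2\end{pmatrix}= \begin{pmatrix}-x & -1 \\ 1 & 0\end{pmatrix}\begin{pmatrix}Y_1 \\ Y_2\end{pmatrix}, \qquad \begin{pmatrix}Y_1(a) \\ Y_2(a)\end{pmatrix}= \begin{pmatrix}B \\ A\end{pmatrix}.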
Now, one can show that there exists a unique solution to the differential equation, y1, satisfying y1(a)= 1, y1'(a)= 0, and another solution, y2, satisfying y2(a)= 0, y2'(a)= 1.
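For example, for y''+ y= 0 with a= 0, those two solutions are y1(x)= \cos(x) and y2(x)= \sin(x), and indeed every solution is y(0)\cos(x)+ y'(0)\sin(x).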
Finally, given any solution z(x) to the differential equation, let A= z(a) and B= z'(a), and show that y(x)= Ay1(x)+ By2(x) satisfies the differential equation with y(a)= A, y'(a)= B, so that, by "uniqueness", z(x)= Ay1(x)+ By2(x). The general extension of that argument to n dimensions shows that the set of all solutions to an nth order linear, homogeneous d.e. forms an n-dimensional vector space, and so every solution can be written as a linear combination of n linearly independent solutions.
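If you want to see that numerically, here is a quick sketch in Python (just an illustration, not part of the argument; the equation y''+ xy'+ y= 0, the interval, and all the names in the script are my own choices, using scipy's solve_ivp):

import numpy as np
from scipy.integrate import solve_ivp

# y'' + x y' + y = 0 written as a first order system with
# Y1 = y', Y2 = y (the same substitution as above):
# Y1' = -x*Y1 - Y2,  Y2' = Y1.
def rhs(x, Y):
    Y1, Y2 = Y
    return [-x*Y1 - Y2, Y1]

a, b = 0.0, 2.0
xs = np.linspace(a, b, 50)

def solve(yprime_at_a, y_at_a):
    # integrate with initial data (Y1(a), Y2(a)) = (y'(a), y(a))
    sol = solve_ivp(rhs, (a, b), [yprime_at_a, y_at_a],
                    t_eval=xs, rtol=1e-10, atol=1e-12)
    return sol.y[1]  # the Y2 = y component

y1 = solve(0.0, 1.0)   # basis solution with y1(a)= 1, y1'(a)= 0
y2 = solve(1.0, 0.0)   # basis solution with y2(a)= 0, y2'(a)= 1
A, B = 2.5, -1.3       # arbitrary initial data
y = solve(B, A)        # the solution with y(a)= A, y'(a)= B

# uniqueness says y and A*y1 + B*y2 must agree everywhere:
print(np.max(np.abs(y - (A*y1 + B*y2))))  # tiny: zero up to solver error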