I don't think the previous replies really answered the question. The reason is that the answer is fairly complicated and is something not usually dealt with in calculus books. So my answer will be fairly high-level, and I may use things that will not be immediately understandable to the typical calculus student. I do encourage you to ask questions about this in order to understand it.
First, a short answer. The short answer is that ##\int f(x)dx## is not what you think it is; in particular, it is not a function, but rather a collection of functions. Also, the ##C## in ##F(x)+C## is not a constant (unlike what many calculus books claim). And lastly, it is not true that ##0\cdot (F(x) + C) = 0##.
In the following, a function ##f:D\rightarrow \mathbb{R}## will always have ##D## open and will always be differentiable.
Let us first define an equivalence relation on the set of all differentiable functions. So, let ##f:D\rightarrow \mathbb{R}## and ##g:D^\prime\rightarrow \mathbb{R}## be differentiable functions. We define ##f## and ##g## to be equivalent, written ##f\sim g##, if ##D=D^\prime## and if ##f^\prime(x)=g^\prime(x)## for all ##x\in D##.
Now we can define equivalence classes. Given a function ##f:D\rightarrow \mathbb{R}##, we define the set
[f] = \{g:D^\prime \rightarrow \mathbb{R}~\vert~f\sim g\}
So, ##[f]## is the set of all functions (on the same domain) whose derivative equals ##f^\prime##, that is, all functions with the same derivative as ##f##.
In particular, we can take the function ##0:D\rightarrow \mathbb{R}:x\rightarrow 0##, and we can look at ##[0]##, the set of all functions whose derivative is ##0##.
Misconception: The functions whose derivative is ##0## are exactly the constant functions
This is not true. The validity of this statement depends crucially on the domain ##D##. If ##D=\mathbb{R}## or if ##D=(a,b)##, then it is indeed true that a function ##f:D\rightarrow \mathbb{R}## has ##f^\prime =0## if and only if ##f## is constant, i.e. ##f(x)=c## for all ##x## and some fixed ##c\in\mathbb{R}##. But for other sets ##D##, this fails.
For example, take ##D=\mathbb{R}\setminus \{0\}##, then the function
##f(x)=\left\{\begin{array}{cc} 2 & \text{if}~x<0\\ 3 & \text{if}~x>0 \end{array}\right.##
satisfies ##f^\prime(x)=0## for all ##x\in D##. And the function ##f## is of course not constant!
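For comparison, here is why the interval case does work; it is just the mean value theorem. If ##D=(a,b)## and ##f^\prime = 0## on ##D##, then for any ##x_1<x_2## in ##D## there is a ##c\in (x_1,x_2)## with
f(x_2) - f(x_1) = f^\prime(c)(x_2-x_1) = 0
so ##f## takes the same value at any two points, i.e. ##f## is constant. On ##D=\mathbb{R}\setminus\{0\}## this argument breaks down when ##x_1<0<x_2##, because the interval ##(x_1,x_2)## is then not contained in ##D##.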
Let's fix a domain ##D## for the rest of this post (so all functions from now on will have this same domain). We can define an addition, a multiplication, and a scalar multiplication on the equivalence classes.
So if ##f,g:D\rightarrow \mathbb{R}##, then we define ##[f]+[g] := [f+g]## and ##[f]\cdot [g]=[f\cdot g]##.
Also, if ##\alpha\in \mathbb{R}##, then we define ##\alpha [f] = [\alpha f]##.
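A detail that is usually left implicit: one should check that such definitions do not depend on the representatives chosen. For the operations we will actually use below, addition and scalar multiplication, this is immediate: if ##f\sim \tilde{f}##, then
(f+g)^\prime = f^\prime + g^\prime = \tilde{f}^\prime + g^\prime = (\tilde{f}+g)^\prime \quad\text{and}\quad (\alpha f)^\prime = \alpha f^\prime = \alpha \tilde{f}^\prime = (\alpha \tilde{f})^\prime
so ##[f+g] = [\tilde{f}+g]## and ##[\alpha f] = [\alpha \tilde{f}]##.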
Now, consider again the zero function ##0:D\rightarrow \mathbb{R}##. We define ##C := [0]##. So ##C## is the set of all functions whose derivative is ##0##. Now, it turns out that for any ##f:D\rightarrow \mathbb{R}##,
[f] = \{f+g ~\vert~g\in C\}
and that ##[f] = [g]## if and only if ##f-g\in C##.
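A concrete illustration, with ##D=\mathbb{R}##: the functions ##x^2## and ##x^2+5## have the same derivative, so
[x^2] = [x^2+5] = \{x^2 + g~\vert~g\in C\} = \{x^2 + c~\vert~c\in \mathbb{R}\}
where the last equality uses that on ##\mathbb{R}## the elements of ##C## are exactly the constant functions.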
Now we are going to commit an abuse of notation, but one which is standard in mathematics. We are going to write ##[f] = f + C##. I wish to stress that ##f+C## is defined exactly as ##[f]## and is not formally "adding a function and a constant".
The operations can be recast in the new notation as
(f+C) + (g+C) = [f] + [g] = [f+g] = (f+g) + C
(f+C)\cdot (g+C) = [f]\cdot [g] = [f\cdot g] = (f\cdot g) + C
\alpha (f+ C) = \alpha [f] = [\alpha f] = \alpha f + C
So in particular, we see that ##0(f+C) = 0 + C##. Writing something like ##0(f+C) = 0## makes no sense at all, since the LHS is a set of functions and the RHS is just one function.
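Here is a small worked example of the scalar multiplication, again with ##D=\mathbb{R}##:
3\cdot (x^2 + C) = 3[x^2] = [3x^2] = 3x^2 + C
Note that no "##3C##" appears on the right: multiplying a function with derivative ##0## by ##3## again gives a function with derivative ##0##, so ##3\cdot [0] = [3\cdot 0] = [0] = C##.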
Now we can define the indefinite integral. Given a function ##f:D\rightarrow \mathbb{R}##, we define
\int f(x) dx := [F]~\text{where}~ F:D\rightarrow \mathbb{R}~ \text{is a function such that}~ F^\prime(x) = f(x)~ \text{for all} ~ x\in D
So we see that ##\int f(x)dx## is not a function, but rather a set of functions!
With our abuse of notation, we write
\int f(x)dx = F + C.
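For instance, take ##f(x)=2x## on ##D=\mathbb{R}##. One antiderivative is ##F(x)=x^2##, so
\int 2x\,dx = [x^2] = x^2 + C = \{x^2 + c~\vert~c\in\mathbb{R}\}
which is a set of functions, not a single function.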
Now, we have the rule
\int \alpha f(x)dx = \alpha \int f(x)dx
But the multiplication on the RHS is not just multiplying a function by a constant; rather, it is the scalar multiplication that we have defined as ##\alpha (f+C) = \alpha f + C##.
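It may help to see why this rule holds with our definitions. If ##F## is an antiderivative of ##f##, then ##(\alpha F)^\prime = \alpha f##, so ##\alpha F## is an antiderivative of ##\alpha f##, and hence
\int \alpha f(x)dx = [\alpha F] = \alpha [F] = \alpha \int f(x)dx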
To get back to your example, we indeed have that
\int 0dx = 0\int 1dx = 0\cdot (x + C)
But this is not equal to ##0##. We have to follow our multiplication rule:
\int 0dx = 0\int 1dx = 0\cdot (x + C) = 0\cdot x + C = 0 + C
And as we see, there are no contradictions here!