It's not that the coefficients are the same as their conjugates; it's whether the inner product of one state with the other is nonzero or not.
I'm going to change your notation a bit to make it easier to demonstrate. Say that instead of states like |000\rangle, |101\rangle, etc. we have a set of n states |\Psi_0\rangle, |\Psi_1\rangle, etc. In general, a state will be some linear combination of these states: |\Psi\rangle = a_0|\Psi_0\rangle + a_1|\Psi_1\rangle + .... We can write this compactly as |\Psi\rangle = \sum_{i=0}^{n-1}{a_i|\Psi_i\rangle}. Now, let's try to find the norm of this state.
\langle\Psi|\Psi\rangle = (\sum_i{a_i^*\langle\Psi_i|})(\sum_j{a_j|\Psi_j\rangle})
= \sum_i\sum_j{(a_i^*\langle\Psi_i|)(a_j|\Psi_j\rangle)}
= \sum_i\sum_j{a_i^*a_j\langle\Psi_i|\Psi_j\rangle}
This is just a general way of saying what you had before. You can see that if you have n states, there will be n^2 terms in this expansion, because of the double summation. In general, all of those terms could be nonzero, in which case you have a whole lot of terms on your hands. For a system with 1,000 states, for instance, you're going to have to keep track of 1,000,000 terms.
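If it helps to see this concretely, here's a minimal sketch in Python/NumPy (the states and coefficients are just made-up illustrative numbers) that computes \langle\Psi|\Psi\rangle as that full double sum, one term per (i, j) pair:

```python
import numpy as np

# A small, made-up set of states stored as vectors -- deliberately NOT orthonormal
states = [np.array([1.0, 0.0, 0.0]),
          np.array([0.6, 0.8, 0.0]),   # overlaps with the first state
          np.array([0.0, 0.0, 1.0])]
a = np.array([0.5 + 0.1j, 0.3 - 0.2j, 0.7 + 0.0j])   # coefficients a_i

n = len(states)
# <Psi|Psi> as the full double sum: n^2 terms, one per (i, j) pair
norm_sq = sum(np.conj(a[i]) * a[j] * np.vdot(states[i], states[j])
              for i in range(n) for j in range(n))
print(norm_sq)
```

With 1,000 states that loop would run a million times, which is exactly the blow-up described above.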

However, in many cases, the set of states you're working with is of a special kind, called an orthonormal set. This means that for any bra \langle\Psi_i| and ket |\Psi_j\rangle, their product \langle\Psi_i|\Psi_j\rangle is 1 if i=j, and 0 if it isn't. We can write this compactly using a symbol called the Kronecker delta: \langle\Psi_i|\Psi_j\rangle = \delta^i_j. This symbol is simply defined to be 1 if i=j, and 0 if not. If you have a set of orthonormal states, things get a lot easier. Let's substitute this into our equation and see what happens:
\langle\Psi|\Psi\rangle = \sum_i\sum_j{a_i^*a_j\langle\Psi_i|\Psi_j\rangle} = \sum_i\sum_j{a_i^*a_j\delta^i_j}.
Now we've made all the bras and kets go away, and we're left with an equation involving only numbers. Furthermore, let's examine a few terms of this sum. If i=7 and j=25, then we have a_{7}^*a_{25}\delta^{7}_{25}. However, \delta^{7}_{25} = 0, so we know this term will vanish. If i=10 and j=10, then we have a_{10}^*a_{10}\delta_{10}^{10}. But \delta_{10}^{10}=1, so this is just a_{10}^*a_{10}. In general, of the n^2 terms in our expansion, only the n terms with equal indices will actually matter. All the other ones (called the cross terms) vanish. So really, we could write our double sum as a single sum:
\sum_i\sum_j{a_i^*a_j\delta^i_j} = \sum_i{a_i^*a_i},
which is a lot easier to evaluate. Whenever you see a delta in an equation, it will eventually allow you to do this: it eats a summation and substitutes the index on that summation with the other index on the delta. This is an enormous help because it cuts down the number of terms dramatically. It's a very common operation in quantum mechanics, so it's important to become familiar with it if you aren't already.
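Here's the same kind of sketch (again just NumPy with made-up coefficients) showing the delta doing its work: the double sum against \delta^i_j gives exactly the same number as the single sum \sum_i{a_i^*a_i}:

```python
import numpy as np

a = np.array([0.5 + 0.1j, 0.3 - 0.2j, 0.7 + 0.0j])   # illustrative coefficients
n = len(a)

delta = np.eye(n)   # Kronecker delta written out as an identity matrix
double_sum = sum(np.conj(a[i]) * a[j] * delta[i, j]
                 for i in range(n) for j in range(n))   # n^2 terms, most vanish
single_sum = np.sum(np.abs(a) ** 2)                     # sum_i a_i^* a_i

print(np.isclose(double_sum, single_sum))   # True: only the i = j terms survive
```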
So if your states are orthonormal, their inner product will be a Kronecker delta, and you can do the above trick. If they're not, then all of the terms will contribute, and you've got a big mess. So it pays to have an orthonormal set of basis states. Fortunately, most of the sets of states you come across are orthonormal (an electron spin-up state has inner product 1 with itself, and 0 with an electron spin-down state, for instance), so you can often use this fact to make your life a lot easier.
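To tie it back to that physical example: treating spin-up and spin-down as the usual two-component column vectors (one conventional choice among several), you can check the orthonormality and the norm trick directly:

```python
import numpy as np

up   = np.array([1.0, 0.0])   # |up>   in the standard z-basis representation
down = np.array([0.0, 1.0])   # |down>

print(np.vdot(up, up), np.vdot(up, down))   # 1.0 0.0 -- an orthonormal pair

# Norm of the superposition a_up|up> + a_down|down>
a_up, a_down = 0.6, 0.8j
psi = a_up * up + a_down * down
print(np.vdot(psi, psi))   # (1+0j), i.e. |0.6|^2 + |0.8|^2
```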