You touch on many different and interesting concepts here.
First, deterministic means that the outcome of an experiment is fixed before the experiment is done. So even before doing an experiment, there is only one outcome which will happen, and (in principle) we can predict this outcome. All of classical physics is deterministic. For example, when I throw a ball, I can (in principle) calculate exactly where it is going to land and how long it is going to take. When I flip a coin, I can calculate (in principle) which side of the coin is going to be up and which side is going to be down. However, the variables and equations involved are so immensely complicated that we can never actually do these calculations. Furthermore, our measurements can never be done precisely enough to know exactly which state we are in now. This is where probability theory comes in. When flipping a coin, the outcome is predetermined exactly, but it is unknown to us. Probability theory still gives us a way of accessing some information about the coin flips.
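To make the thrown-ball example concrete, here is a minimal Python sketch (the launch speed and angle are assumed values, and air resistance is ignored) of how the landing point and flight time follow deterministically from the initial conditions:

```python
import math

# Idealized "thrown ball" example: with the launch speed and angle known
# exactly (and no air resistance), the landing point and flight time follow
# deterministically from Newtonian mechanics.
g = 9.81                   # gravitational acceleration, m/s^2
v = 10.0                   # launch speed, m/s (assumed value)
angle = math.radians(45)   # launch angle (assumed value)

flight_time = 2 * v * math.sin(angle) / g
landing_distance = v ** 2 * math.sin(2 * angle) / g
print(f"lands after {flight_time:.2f} s, {landing_distance:.2f} m away")
```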
As another example, the number 0.1234567891011121314151617... is called the Champernowne constant. It is completely determined; it is clear to everybody exactly how this number continues. However, if I ask you for the 1000th digit, then you would have to do a tedious calculation to find out. So again, the number is determined, but that determination is unknown to us. We can again use probability theory to study the number, for example to figure out the chance of getting a 1 as the 1000th digit. This is also the idea behind pseudo-random number generators.
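If you want to see what such a "tedious calculation" looks like, here is a small Python sketch (the function name and approach are just illustrative) that walks through the concatenated integers 1, 2, 3, ... to pick out the n-th digit:

```python
# Sketch: the n-th digit (1-indexed) of the Champernowne constant
# 0.123456789101112..., found by walking through the concatenated integers.
def champernowne_digit(n: int) -> int:
    """Return the n-th digit after the decimal point."""
    length = 1   # current block: numbers with `length` digits
    count = 9    # how many such numbers there are (9, 90, 900, ...)
    start = 1    # first number in the block (1, 10, 100, ...)
    while n > length * count:
        n -= length * count
        length += 1
        count *= 10
        start *= 10
    number = start + (n - 1) // length        # the integer containing digit n
    return int(str(number)[(n - 1) % length])  # position of the digit inside it

print(champernowne_digit(1000))  # the 1000th digit
```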
Truly random processes are very difficult to generate and do not exist in classical physics. However, they do come up in quantum mechanics. Whether the processes which come up there are actually deterministic in some sense is currently unknown. Probability theory is currently the only tool available to study these processes.
You also mention cause. Here you must specify what exactly you mean by cause. The term is rather vague and philosophical. I don't even know whether it is a meaningful term in science, but others probably know more.
"If we flip a coin 100,000 times and the number of heads matches the number of tails 50-50 every time, +/- some tiny variation, then how is that a random outcome? Wouldn't it be truly random if we could flip 90% heads in one go and then 20% heads in another go just as easily, and just as easily 40% heads, 1%, or 72%?"
Yes, it is truly amazing that most of the processes we encounter satisfy the Law of Large Numbers. That is, if we repeat the experiment, then the averages converge to a fixed number. This is still random because we have no way of predicting one specific outcome. Whatever information we have about the probabilities is totally useless for predicting the outcome of the next coin toss. This is what it means to be a random outcome. When you have done sufficiently many experiments, however, some order in the chaos does appear. But this only happens when you talk about the outcomes of many experiments at once.
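Here is a small simulation sketch (plain Python, with an arbitrarily chosen seed) of what this looks like for coin flips: each individual flip stays unpredictable, yet the fraction of heads settles near 1/2 as the number of flips grows.

```python
import random

# Law of Large Numbers for coin flips: the running fraction of heads
# converges toward 1/2 even though no single flip can be predicted.
random.seed(0)  # reproducible run; any seed works

for n in (100, 10_000, 1_000_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(f"{n:>9} flips: fraction of heads = {heads / n:.4f}")
```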
Also, not all processes satisfy the Law of Large Numbers. One famous example is this experiment:
"Choose a number at random from ##(-\pi/2,\pi/2)## (where all numbers are equally likely). Then shine a light on a wall where the angle between the light and the wall is the number you have chosen. The outcome of the experiment is the place on the wall where the light hits"
This experiment has the curious property that the averages do not converge. This means that it does exactly what you described in your post: in one run, the average place where the light hits the wall can be entirely different from the average place where the light hits in a second run. Even if you make the runs very long, you will see no pattern in the average place where the light hits.
http://www.math.uah.edu/stat/applets/CauchyExperiment.html
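You can also simulate this experiment yourself. Here is a rough Python sketch (seed and sample sizes chosen arbitrarily) where each run's average comes out wildly different, because ##\tan## of an angle chosen uniformly from ##(-\pi/2,\pi/2)## is a standard Cauchy random variable, which has no mean:

```python
import math
import random

# Light-on-the-wall experiment: pick an angle uniformly from (-pi/2, pi/2)
# and record tan(angle), the point on the wall where the light hits.
# The running average of these points does not settle down, however long the run.
random.seed(1)

def run_average(n: int) -> float:
    total = 0.0
    for _ in range(n):
        angle = random.uniform(-math.pi / 2, math.pi / 2)
        total += math.tan(angle)
    return total / n

for run in range(5):
    print(f"run {run + 1}: average over 1,000,000 samples = {run_average(1_000_000):.2f}")
```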
Luckily for us, these types of situations are very rare.