The subtle point about a "confidence interval" being discussed here highlights the important distinction between assuming a quantity has a "fixed, but unknown" value and assuming it is a random variable with an associated probability distribution.
For example, suppose we have 10 empty boxes distinguished by the labels "1", "2", ..., "10". If we are only given the information that one of the boxes was opened and a ball was placed inside it, we cannot prove statements like "There is a probability of 1/10 that the ball is in box '2'". No probabilistic procedure for placing the ball inside a box was specified. For example, we cannot rule out the possibility that the ball was put in a box by someone who had a preference for the number "7".
In a practical treatment of the above example, it is common to take a Bayesian approach by assuming that there is a probability of 1/10 of the ball being placed in any given box. The interesting justification for this assumption is that a uniform distribution accurately portrays our knowledge of how the ball was placed. (Our "knowledge", in this case, happens to be total ignorance!)
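The dependence of the answer on the unspecified placement process can be sketched in a short simulation. Everything here is hypothetical: neither the biased placer nor the 75% figure appears in the example above; they just illustrate that "P(box 2) = 1/10" is a property of an assumed placement process, not of the boxes themselves.

```python
import random
from collections import Counter

# Two hypothetical placement processes for the ball-in-a-box example.
# The problem statement specifies neither; the 1/10 claim holds only
# under an assumption like uniform_placement.
rng = random.Random(0)
boxes = list(range(1, 11))

def uniform_placement():
    # Each box equally likely -- the usual Bayesian "total ignorance" assumption.
    return rng.choice(boxes)

def biased_placement():
    # A hypothetical placer who favors box 7 three quarters of the time,
    # and otherwise picks uniformly.
    return 7 if rng.random() < 0.75 else rng.choice(boxes)

for process in (uniform_placement, biased_placement):
    counts = Counter(process() for _ in range(100_000))
    print(process.__name__, "P(box 2) ~=", counts[2] / 100_000)
```

Under the uniform process the empirical frequency for box 2 is near 0.1; under the biased process it is far below 0.1, even though both are consistent with the information actually given.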
The definition of "confidence interval" in frequentist statistics (the kind commonly taught in introductory statistics courses) takes the view that the parameter to be estimated has a "fixed, but unknown" value. There are two distinct types of "confidence interval" discussed in frequentist statistics: formally defined confidence intervals versus intervals that are informally called "confidence intervals".
The formal definition of a confidence interval doesn't require specifying enough information to give the interval any numerical endpoints. Such intervals are usually stated using a variable representing the "fixed, but unknown" parameter. For example, ##(\mu - 20.3, \mu + 20.3)## is such an interval. The numerical endpoints of this confidence interval can't be known unless we are given the value of ##\mu##, which is the "fixed, but unknown" parameter. (If we were given that value, then ##\mu## would no longer be unknown and there would be no point in creating confidence intervals.)
The informal type of "confidence interval" is the kind where definite numerical endpoints are obtained by using an estimated value of the "fixed, but unknown" parameter. For example, if a sample mean is ##\hat{\mu} = 9.0##, then an informal confidence interval might be ##(9.0 - 20.3, 9.0 + 20.3)##. However, even if this is a "95% confidence interval", it cannot be proven that there is a 95% probability that the "fixed, but unknown" value of ##\mu## is within this interval. The assumption "fixed, but unknown" does not provide any information about a random process that was used in setting the "fixed, but unknown" value. For example, maybe the population was generated by a person who liked to have ##\mu##'s greater than 30.
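What the frequentist "95%" does guarantee is a long-run property of the interval-generating procedure, not a probability statement about any one computed interval. A minimal sketch with assumed values not given in the text: a normal population, sample size ##n = 100##, and ##\sigma## chosen so that ##1.96\,\sigma/\sqrt{n} \approx 20.3##, matching the half-width in the example above.

```python
import random
import statistics

# Hypothetical setup: samples of size n from a normal population whose
# mean mu is "fixed, but unknown".  sigma = 103.57 is chosen so that
# 1.96 * sigma / sqrt(100) is about 20.3, the half-width used above.
random.seed(0)

mu = 42.0          # known only to the simulator, never to the "experimenter"
sigma = 103.57
n = 100
half_width = 20.3

covered = 0
trials = 10_000
for _ in range(trials):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    xbar = statistics.fmean(sample)
    # The interval is a function of the sample alone; mu is used here
    # only to *check* coverage, which a real experimenter could not do.
    if xbar - half_width < mu < xbar + half_width:
        covered += 1

print(f"empirical coverage: {covered / trials:.3f}")  # close to 0.95
```

Roughly 95% of the intervals produced this way contain ##\mu##; that is the sense in which the procedure earns the label "95%". It says nothing about whether any single already-computed interval, such as ##(9.0 - 20.3, 9.0 + 20.3)##, contains ##\mu##.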
The informal Bayesian approach to such an informal confidence interval is simply to believe that there is a 95% probability that the value of ##\mu## is within ##(9.0 - 20.3, 9.0 + 20.3)##.
The rigorous Bayesian approach is to make the assumption that ##\mu## was selected as a random variable having some particular probability distribution. It is possible to use that information and the values in the sample to compute the probability that ##\mu## is within ##(9.0 - 20.3, 9.0 + 20.3)##. (In this approach, some people use the terminology "credible interval" instead of "confidence interval".) However, the calculated probability depends on what particular distribution is assumed for ##\mu##.
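A minimal sketch of that calculation, under assumed ingredients that the text does not specify: a normal prior for ##\mu##, a normal population with known ##\sigma##, and the conjugate normal posterior that results. Changing the assumed prior changes the answer, which is exactly the point of the last sentence above.

```python
import math

# Hypothetical ingredients for the rigorous Bayesian calculation:
# prior mu ~ Normal(m0, s0^2), known population sigma, sample of size n
# with observed mean xbar = 9.0 (the value from the example above).
m0, s0 = 0.0, 50.0       # assumed prior mean and std dev for mu
sigma, n = 103.57, 100   # assumed known population sigma and sample size
xbar = 9.0               # observed sample mean

# Standard conjugate-normal posterior: precisions add, means are
# precision-weighted.
prec = 1 / s0**2 + n / sigma**2
post_var = 1 / prec
post_mean = post_var * (m0 / s0**2 + n * xbar / sigma**2)

def normal_cdf(x, mean, sd):
    """CDF of a Normal(mean, sd^2) at x, via the error function."""
    return 0.5 * (1 + math.erf((x - mean) / (sd * math.sqrt(2))))

lo, hi = 9.0 - 20.3, 9.0 + 20.3
sd = math.sqrt(post_var)
p = normal_cdf(hi, post_mean, sd) - normal_cdf(lo, post_mean, sd)
print(f"P(mu in ({lo}, {hi}) | data) = {p:.3f}")
```

With these particular assumptions the posterior probability happens to come out near 0.95, but a different prior (say, one concentrated above 30, like the eccentric population-maker mentioned earlier) would give a very different number for the same interval.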
A further point about (formal) confidence intervals: there is nothing in the definition of a "confidence interval" that specifies that the procedure for generating confidence intervals must make sense! The scenario for "confidence intervals" is that you have some function ##f(S)## that maps each possible sample ##S## to some interval. As we know, mathematical functions are permitted to do strange things. For example, ##f## might be a constant function that maps all samples to the interval ##(-1, 1)##. Or ##f## might be a function with eccentric rules, like "Map ##S## to ##(-1, 1)## if the sample contains the value 7.36; otherwise map ##S## to the interval ##(-2, 19)##".
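The constant function mentioned above really is a legal interval-valued function of the sample. A quick simulation (with assumed population values, reusing the sigma and half-width from the earlier example) shows why nobody would call it a useful confidence interval: its coverage is 100% or 0% depending on where the fixed ##\mu## happens to sit, while a data-dependent interval achieves roughly 95% coverage whatever ##\mu## is.

```python
import random
import statistics

def constant_interval(sample):
    # Ignores the sample entirely -- mathematically legal, statistically useless.
    return (-1.0, 1.0)

def sensible_interval(sample, half_width=20.3):
    # The data-dependent interval from the earlier example.
    xbar = statistics.fmean(sample)
    return (xbar - half_width, xbar + half_width)

def coverage(interval_fn, mu, sigma=103.57, n=100, trials=5000, seed=1):
    """Fraction of simulated samples whose interval contains the fixed mu."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        sample = [rng.gauss(mu, sigma) for _ in range(n)]
        lo, hi = interval_fn(sample)
        hits += lo < mu < hi
    return hits / trials

print(coverage(constant_interval, mu=0.5))   # 1.0: this mu lies in (-1, 1)
print(coverage(constant_interval, mu=42.0))  # 0.0: this mu is outside (-1, 1)
print(coverage(sensible_interval, mu=42.0))  # roughly 0.95, whatever mu is
```

The definition of "confidence interval" rules out neither function; what distinguishes the sensible one is that its coverage does not depend on the unknown value of ##\mu##.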
Of course, the commonly encountered kinds of confidence intervals are those where the interval is constructed as a function of both the sample data and the "fixed, but unknown" parameter of interest.