Random selection, with skewed distribution 
#1
Aug 15, 2012, 05:28 AM

P: 748

Hi there,
(to mod: not sure where to post this, please move if I've got it wrong) I have a grid of values with 41x161 nodes describing some parameter space. Each node has an associated value, λ, which represents the uncertainty of the parameter choice at that node. I want to make/find an algorithm that samples my grid randomly, because I want to explore the parameter space. However, I want the algorithm to sample the 'best' space more often, i.e. the nodes with low λ. So I am looking for an algorithm that is random, but is also more likely to select the best nodes. I have two ideas:

1) Create an array containing all the nodes, but over-represent the best nodes (i.e. put them into the array multiple times), where the number of copies of a node is a function of its 'goodness'. Then randomly sample this array. Good nodes are more likely to be sampled simply because they occur most often.

2) Randomly sample a node and then randomly decide whether or not to use it, based on a 'gate keeper' with a randomly assigned threshold tolerance. Good nodes are likely to pass whether the gate keeper's tolerance is high or low, whereas bad nodes will only pass if the gate keeper is feeling generous.

Are there any well-established, good ways to do this? Does what I'm doing even make sense?
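Idea 1, for example, can be sketched in a few lines of Python (the λ values and the copies-per-node rule below are made-up placeholders, not the real grid):

```python
import random

# Hypothetical 1-D stand-in for the 41x161 grid: each entry is a lambda value.
lam = [0.1, 0.5, 1.0, 2.0, 5.0]

# Idea 1: over-represent the good (low-lambda) nodes by duplicating them.
# The number of copies, max(1, round(1/lambda)), is an assumed weighting rule.
pool = []
for i, l in enumerate(lam):
    pool.extend([i] * max(1, round(1.0 / l)))

# Uniform draws from the padded array are now skewed towards low lambda.
picks = [random.choice(pool) for _ in range(10000)]
```

The drawback of this scheme is memory: very uneven weights mean a very long padded array.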


#2
Aug 15, 2012, 05:44 AM

P: 4,573

Hey billiards.
If you want to sample low nodes more frequently, you could build your sampling distribution from the inverse of the λ values on your grid: this will sample the low-λ nodes more frequently and the high-λ nodes less so. You can use a technique from the Markov Chain Monte Carlo (MCMC) family to simulate arbitrary distributions: take the inverse of the λ values, normalize so you have a valid distribution (this works even if it is a binned distribution), and then use something like Metropolis-Hastings to turn a regular uniform pseudorandom generator into a custom pseudorandom generator for your distribution over λ values. If you can write down any distribution that corresponds to your desired sampling distribution, then the MCMC techniques will let you sample from it.
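For a finite grid like this one, the invert-and-normalize step can even be sampled directly, without the MCMC machinery; a minimal sketch with assumed toy values:

```python
import random

# Toy lambda values on a tiny grid (assumed numbers); lower lambda = better.
lam = {(0, 0): 0.2, (0, 1): 1.0, (1, 0): 0.5, (1, 1): 4.0}

# Invert and normalize to get a valid sampling distribution over the nodes.
nodes = list(lam)
weights = [1.0 / lam[n] for n in nodes]
total = sum(weights)
pdf = [w / total for w in weights]

# A finite discrete distribution can be sampled directly,
# with no need for a Metropolis-Hastings chain.
sample = random.choices(nodes, weights=pdf, k=5000)
```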


#3
Aug 15, 2012, 06:22 AM

P: 748

Thanks.

TO MOD: How can I change the title of my thread?


#4
Aug 15, 2012, 07:00 AM

P: 4,573

Call your target sampling distribution f(x). You then use the techniques of MCMC to basically construct a pseudorandom generator for f(x). Here is a wiki link to one technique, Accept-Reject: http://en.wikipedia.org/wiki/Rejection_sampling The algorithm is specified there, and as long as you have a pseudorandom generator for the uniform distribution, and your pdf meets the requirements, you can construct a procedure to simulate your f(x). The algorithm is only a couple of lines, so it's not too bad. There are other methods like this one (Metropolis-Hastings): http://en.wikipedia.org/wiki/Metropo...ings_algorithm These have their own issues, but the best way to find out is to test empirically: implement the code, generate ten thousand samples (or as many as desired), and plot the histogram. I'd take a look at the Accept-Reject method first and test it out. If you're using a binned distribution, you could calculate 1/λ(x,y), where λ(x,y) is the discrete distribution, and your bounding constant M will be the maximum of this quantity.
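For a discrete grid, the Accept-Reject recipe above might look something like this (toy λ values and a uniform proposal; the target f ∝ 1/λ is the assumed choice):

```python
import random

# Toy lambda values (assumed); target density f proportional to 1/lambda.
lam = [0.1, 0.5, 1.0, 2.0]
f = [1.0 / l for l in lam]
M = max(f)  # bound: f(x) / M <= 1 for every node

def accept_reject():
    # Propose a node uniformly at random, accept with probability f(x)/M;
    # repeat until some proposal is accepted.
    while True:
        x = random.randrange(len(lam))
        if random.random() < f[x] / M:
            return x

draws = [accept_reject() for _ in range(10000)]
```

Plotting a histogram of `draws` should show node 0 (smallest λ) dominating, as intended.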


#5
Aug 15, 2012, 08:51 AM

Sci Advisor
P: 3,297




#6
Aug 15, 2012, 09:51 AM

P: 748

Would it help if I described the physical problem I am trying to solve? I am a seismologist looking at shear wave splitting, which is analogous to birefringence in light waves: the phenomenon whereby a transversely polarised signal splits into two (a fast and a slow wave) as it passes through an anisotropic medium. What I have done is to measure the "splitting parameters" of the medium. These parameters are the angle through which the signal components on the recording device must be rotated so that each component carries a similar waveform (fast wave on one component, slow wave on the other), and the total delay time between the two waves. The parameter space (my grid) has 181 nodes in one dimension, from -90 to 90 degrees at one node per degree, and 41 nodes in the other direction at every 0.1 seconds of delay time. For each node I use the parameters to try to "undo" the splitting on the signal, and then measure how well this works. I end up with a grid of values over the whole parameter space. I then perform an F-test to contour the grid, and normalise so that a value of 1 corresponds to the 95% confidence interval and larger numbers correspond to lesser confidence intervals. This grid is then a measure of the anisotropy (or at least the birefringence) of the medium through which the ray has passed, whereby each point in the parameter space has some "uncertainty" value attached. This grid is then used as the input for the next problem, which is the one I am trying to work out:

I need to use this measure of anisotropy as an input to another problem. I want to input the whole grid, but that is too computationally expensive, so I have to use a subset of the grid. I've tried various ways of generating subsets, but there are always drawbacks, so my latest idea is to take a random subset. The drawback of that is that I need to make sure that my "best" parameters are part of the subset, which is why I need to build in some kind of skew so that my random samples better represent the parameters with the least uncertainty. "Sample" in this case means to pick a node. I want to pick nodes at random, but preferentially pick the nodes with the least uncertainty.


#7
Aug 15, 2012, 10:51 AM

Sci Advisor
P: 3,297

Based on experience answering questions in this forum (and not particularly on your own post), many people use the term "confidence interval" without understanding its technical definition. In particular, many people mistake "confidence" for a synonym of "probability". So they think that saying "a 95% confidence interval for the mean is [1.38, 2.69]" means there is a 95% chance the mean is in that interval (i.e. they mistake a "confidence interval" for a "credible interval"). In statistics, "tests" are associated with significance levels, not with confidence intervals. Perhaps you are using the F statistic as an estimator of something rather than as a test for something. Why does "the grid" become a measure of something? To say the grid is a "parameter space" suggests that only one node on the grid gives the appropriate parameters for a given situation. Why would results at all the other nodes matter?


#8
Aug 15, 2012, 11:40 AM

P: 748

This paper describes the method: http://advseismo.asu.edu/readings/19...rch_Silver.pdf Read the section on 'Measuring shear wave splitting' to understand the problem; 'Error estimation' and, in the appendix, 'Calculation of confidence region' give more details on the use of the F-test and how it is used to contour the measurements. It is important to note that I do not have a problem with any of the above. The problem comes in the extension to the method that I am currently working on.


#9
Aug 15, 2012, 08:23 PM

P: 4,573

I noticed you mentioned this statement:
I'm guessing the problem you have is one of random sampling, but it is a little troubling to see the statement above, because this is not how confidence intervals work: they simply relate the probability of a region to the interval that describes that region.

The other thing I want to ask is how you want to weight your probabilities at each node in the grid. The simplest transformation is to invert the probabilities and then normalize to get a PDF to be used as a sampling distribution, but I have a feeling that you have extra conditions and subtleties that we are missing in the context of your application.

With regard to sampling, it's best if you lay out all the conditions for the actual sampling distribution and also for the sampling procedure: your problem is obviously not some quick textbook problem and you can't afford to make major screwups. Please post your comments on, firstly, what specific properties you want the sampling distribution to have (you said the inverse of the weights, but should this inverse be transformed in a certain way? Does it depend on the position of the node? Does it depend on the relationship between nodes?) and, secondly, how you are going to sample (is it a truly independent random sample, or do you do other things that make it a non-independent process)?


#10
Aug 16, 2012, 04:50 AM

P: 748

I'm thinking this might have got a little bogged down.
Let me rephrase the problem. Imagine a string of numbers: 50,2,5,1,7,35,124,34,13,67,234,...... Now I want to randomly pick a subset of these numbers. Easy enough. The twist: I want my subset to contain more of the lower numbers, and fewer of the higher numbers.
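One established answer to exactly this is weighted sampling without replacement via random keys (the Efraimidis-Spirakis method); here the weight is taken as 1/value so that low numbers are favoured, which is an assumed choice:

```python
import random

nums = [50, 2, 5, 1, 7, 35, 124, 34, 13, 67, 234]

def biased_subset(values, k):
    # Efraimidis-Spirakis: give each item the key u**(1/w) with u uniform
    # in [0, 1) and keep the k largest keys. With weight w = 1/v the key
    # becomes u**v, so small values get (stochastically) larger keys.
    keyed = [(random.random() ** v, v) for v in values]
    keyed.sort(reverse=True)
    return [v for _, v in keyed[:k]]

subset = biased_subset(nums, 4)
```

Each item can appear at most once in the subset, which matches picking distinct nodes from a grid.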


#11
Aug 16, 2012, 04:53 AM

P: 748

I have an algorithm to do this now. What it does is pick five (or six, or seven, I haven't decided yet) random numbers, and then keep the lowest one. Then repeat until I have my subset.
What do you think? 
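That min-of-k idea is easy to check empirically; a quick sketch (the list of values and k=5 are arbitrary placeholders):

```python
import random

values = [50, 2, 5, 1, 7, 35, 124, 34, 13, 67, 234]

def min_of_k(values, k=5):
    # Draw k values uniformly (with replacement) and keep the smallest;
    # larger k skews the result harder towards the low end.
    return min(random.choice(values) for _ in range(k))

subset = [min_of_k(values) for _ in range(1000)]
```

Note this samples with replacement, so the same low number can appear many times in the subset.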


#12
Aug 16, 2012, 05:00 AM

P: 4,573

You can simply construct the PDF to shift the probabilities where you want them. Order statistics and their distributions deal with minima and maxima, if you are interested.
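To spell out the order-statistics point behind the min-of-k trick: if each pick is an i.i.d. draw with CDF [itex]F[/itex], the minimum of [itex]k[/itex] picks has CDF

```latex
F_{\min}(x) = P\left(\min(X_1,\dots,X_k) \le x\right) = 1 - \bigl(1 - F(x)\bigr)^k
```

so increasing k pushes more of the probability mass towards the low values.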


#13
Aug 16, 2012, 05:13 AM

P: 748

The two extremes would be: 1) sample one and only one node, and make that your "best" node; or 2) sample absolutely every node and wait about a year for the calculations to run (no joke). What I want is something in between: to sample a subset of the nodes, but to make that subset focussed around the best nodes. I've tried just using the nodes inside the 95% confidence region, but the drawback is that sometimes the 95% confidence region is huge, and sometimes it's tiny. I could maybe take the 30 best nodes, but those would likely all cluster together, and in the case where I have a large 95% confidence region I want to explore the parameter space a bit more. That is why I like the random approach: it gets around some of the drawbacks of the other approaches I've tried, but has the major drawback that it won't focus on my "best" parameters, unless I can build in some function that skews the random sampling towards the "best" nodes. I don't want the algorithm to find only "good" nodes that are all clustered together; if there are good nodes elsewhere I want the algorithm to be able to find them.


#14
Aug 16, 2012, 02:26 PM

P: 748




#15
Aug 16, 2012, 02:49 PM

Mentor
P: 18,299




#16
Aug 16, 2012, 04:12 PM

Sci Advisor
P: 3,297

As I see it, your data is a set of N vectors. Each vector is of the form [itex] (\phi_i, \delta^2_i, \lambda_i) [/itex]. The values [itex] \phi, \delta^2 [/itex] are estimates of some physical parameters. The value [itex] \lambda_i [/itex] can be used (somehow!) to define an ellipse around the point [itex] (\phi_i, \delta^2_i) [/itex] that is a 95% "confidence interval". (The common misinterpretation of that interval would be that there is a 0.95 probability of the true values of the parameters being in that ellipse. If you take a Bayesian point of view, you can make this interpretation valid. Otherwise, watch your step.) It isn't clear to me whether the collection of data that you have is intended to estimate one correct pair of values [itex] (\phi,\delta^2) [/itex] that applies over the whole earth, or whether your analysis assumes that each estimate is estimating a different true pair of values at a different location on the earth. For some unstated reason, you want to take samples from your data and do something with them. I think you want to take more samples of the data points with the smaller [itex] \lambda_i [/itex]. You might also want to sample points from inside the ellipse around each [itex] (\phi_i,\delta^2_i ) [/itex], in which case you would be sampling points not actually in the original set of estimates. How you should do this is not clear until you define precisely what you are trying to accomplish. If you are randomly sampling the data, it seems that you must be doing a probabilistic simulation of something. What are you simulating? Are you estimating the statistical variability of some function of the data? Perhaps you are merely trying to estimate one true value [itex] (\phi, \delta^2) [/itex] by combining all the estimates.


#17
Aug 16, 2012, 05:39 PM

P: 748

Now, I don't expect anybody to follow my particular physical application. In fact I don't have a problem with that bit: I've already solved it. I'm looking for a slightly better way to handle one of the intermediate steps, and the problem I have there is with the mathematics. What I was hoping for by coming here was help with the maths. The maths problem I have stated is basically this: given a set of numbers, e.g. 41,41,32,33,24,23,22,21,12,14,24 etc. etc., come up with an algorithm that picks out a random subset of those numbers, where the subset is biased towards the low numbers. Now I have an algorithm, but maybe someone has a better one? Surely someone has solved this kind of problem in the past!?


#18
Aug 16, 2012, 09:01 PM

Sci Advisor
P: 3,297



