When will ball 25 drop? Predicting future observations using data

In summary, the conversation discusses predicting future observations based on a collection of data containing observations over time. The example of billiard balls dropping into a pocket is used to demonstrate the concept. The thread starter is looking for an algorithm rather than a formula, and there is a discussion about what "accuracy" means for different methods. The thread then turns to predicting when particular license plates will first be observed and the factors that affect the accuracy of that prediction.
  • #1
DaveC426913
I've got a collection of data that contains observations over time. I want to predict when a given future observation is likely to occur.

As a simple example: Say I'm watching billiard balls drop into a pocket. The billiard balls drop in with approximate regularity.

My dataset:
1 ball 0:00
2 ball 0:57
3 ball 2:02
4 ball 2:48
5 ball (not observed)
6 ball (not observed)
7 ball 5:55
8 ball (not observed)
9 ball 7:40
...
n ball ?

Understand that when the 9 ball drops at 7:40, I know it is the 9 ball. This means I know the 8 ball (and 5 and 6) went in the pocket, even though I didn't observe it and don't know when it did so.

These observations are ongoing, so, after each ball, I will update the prediction, getting ever more accurate as I get more data.

I want to create an algorithm that will predict when ball n (say, ball 25) is expected to drop.

BTW, I am better at programming than math, so I am more comfortable with an algorithm (sequential steps) than a formula.
 
  • #2
I'm wondering if it is simpler than I'm expecting.

After ball 2 drops, the average is 57 seconds. So ball 15 will drop at 14x57 s.
After ball 3 drops the average is (2:02 / 2 = ) 61 seconds. So ball 15 will drop at 14x61 s.

Ball 7 dropped 3m7s after ball 4. That's (187 / 3 = 62.3) on average for those 3 balls...

Is it possible that I can ignore all the intermediate observations and simply say 9 balls dropped in 7m40s - then project that forward? Does that lead to the most accurate prediction for ball n?
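Something like this is what I have in mind, as code (just a sketch, untested; times hard-coded in seconds):

Code:
// Sketch: project ball n using only the first and last observed drops.
// Times are seconds after the first observed ball.
function predictDrop(lastBallNumber, lastBallTime, n) {
    // mean interval so far, from first and last observation only
    var avgInterval = lastBallTime / (lastBallNumber - 1);
    return avgInterval * (n - 1);   // predicted time of ball n, in seconds
}
// e.g. ball 9 observed at 7:40 = 460 s: predictDrop(9, 460, 25) -> 1380 s = 23:00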
 
  • #3
DaveC426913 said:
Is it possible that I can ignore all the intermediate observations and simply say 9 balls dropped in 7m40s - then project that forward?

Yes, you could take this approach, i.e. estimate the mean interval with the sample mean. This approach has theoretical merit, and it is also quite simple.

(Bayesian thought: is there prior information about balls dropping that is being left out here?)

DaveC426913 said:
Does that lead to the most accurate prediction for ball n?

This is where it gets tricky. What does it mean to be most accurate? In practice, this typically comes down to choosing a cost function and figuring out how to minimize it. If you want to minimize squared errors, the answer is the mean. There are many other cost functions to choose from; another common one is the L1 norm, which corresponds to the median.
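A tiny illustration in code (the gap numbers come from the billiard example above; just a sketch):

Code:
// Sketch: two different "best" interval estimates from the observed gaps (seconds).
var gaps = [57, 65, 46];   // ball 1->2, 2->3, 3->4

// mean gap minimizes the sum of squared errors
var mean = gaps.reduce(function (a, b) { return a + b; }, 0) / gaps.length;

// median gap minimizes the sum of absolute (L1) errors
var sorted = gaps.slice().sort(function (a, b) { return a - b; });
var mid = Math.floor(sorted.length / 2);
var median = sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;

// Either estimate predicts ball n at roughly (n - 1) * estimate seconds after ball 1.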
 
  • #4
A proper linear fit will typically give a better estimate for the next balls. There are tools for doing this, but if you are not interested in the uncertainty of the estimate, there are fixed formulas you can apply directly.
 
  • #5
A simple best single prediction will suffice.
As ball n's drop approaches, the margin of error will shrink toward zero.

And I'm going to do it as an algorithm, rather than a formula.

So,
n=2: the predicted latency is 57s.
n=3: (57+65)/2 = 61s.
n=4: (57+65+46)/3 = 56s.

When I get to n=7, I'm not sure whether to calculate it as (5:55 - 2:48 =) 187s / 3 ≈ 62.3s. That would give equal weight to latencies that are derived rather than observed.
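In code, I'm picturing something like this (just a sketch, untested):

Code:
// Sketch: running average latency; unobserved balls share the elapsed time evenly.
// observations: e.g. [{ball: 1, time: 0}, {ball: 2, time: 57}, ..., {ball: 9, time: 460}]
function averageLatency(observations) {
    var totalTime = 0, totalGaps = 0;
    for (var i = 1; i < observations.length; i++) {
        totalTime += observations[i].time - observations[i - 1].time;   // seconds elapsed
        totalGaps += observations[i].ball - observations[i - 1].ball;   // 1 if adjacent, 3 for 4 -> 7
    }
    return totalTime / totalGaps;   // derived latencies get the same weight as observed ones
}
// Predicted drop of ball n: averageLatency(observations) * (n - 1) seconds after ball 1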
 
  • #6
If you sum all differences, your sum is just the difference between the last and first observed event. You give zero weight to the events in between (you throw away most of the information), and if the first or last event are a bit off that has a big impact on your extrapolation.
 
  • #7
It would be good to know the rules of the game before playing. Do we have, for instance, a parameterized distribution of outcomes where we are trying to estimate the parameters based on the outcomes? Maybe something like "each ball drops in a normal distribution with mean m and standard deviation s centered on a nominal drop time at a regular interval i"?
 
  • #8
jbriggs444 said:
It would be good to know the rules of the game before playing. Do we have, for instance...
I ... don't know.
 
  • #9
I think the linear fit / least squares thing would do the trick.

In looking it up, I see most places suggest you'd be wanting a calculating machine (i.e. computer) just to do the calcs, since it's not simple, even for the simplest data sets.

I was hoping my billiard balls example would elicit an answer so simple even I could grasp it.

I'll show you what I'm actually doing:

http://www.davesbrain.ca/science/plates/ (It takes ten seconds or so to render).

I'd like to predict when a given plate (such as CRY) can be expected to be observed.

My primitive calculation function simply averages the delay between subsequent data points (some of the delays are even negative).
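Roughly, the idea is something like this (a sketch of the approach, not the actual code on the page; property names are made up):

Code:
// Sketch: average the day gaps between consecutive sighted plates and project forward.
function predictSighting(records, platesToGo) {
    var msPerDay = 86400000;
    var delays = [];
    for (var i = 1; i < records.length; i++) {
        // gap in days between consecutive sightings (can be negative)
        delays.push((records[i].date - records[i - 1].date) / msPerDay);
    }
    var avg = delays.reduce(function (a, b) { return a + b; }, 0) / delays.length;
    var last = records[records.length - 1].date;
    return new Date(last.getTime() + platesToGo * avg * msPerDay);
}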
 
  • #10
A linear fit should work.

Things to consider:
- the rate of license plates handed out will vary with time, especially with the seasons, but also with some long-term trend (population changes, cars per capita and so on). Fit over a long timespan to get rid of fluctuations, but don't fit linearly over 10+ years.
- the sighting efficiency might vary with time
- if you have precise dates (like your own license plate) they can be treated differently because you know a date that is certainly in the introduction range.
- you can estimate the time from introduction to spotting a plate by looking at the spread of the data points, or by looking how often you see a few given plates where you know their introduction period is over (e.g. all CC).
 
  • #11
mfb said:
Things to consider:
- the rate of license plates handed out will vary with time, especially with the seasons, but also with some long-term trend (population changes, cars per capita and so on). Fit over a long timespan to get rid of fluctuations, but don't fit linearly over 10+ years.
Yep. My current primitive algorithm predicts my desired target in 2024. Seven years.

mfb said:
- the sighting efficiency might vary with time
Definitely.
I can never know whether a given plate was already on the road, encountered but unnoticed, before I first spotted it. So my diligence in observing has a direct impact on the data.

I started this when my commute was 50km each way. Now it's 15.

mfb said:
- you can estimate the time from introduction to spotting a plate by looking at the spread of the data points, or by looking how often you see a few given plates where you know their introduction period is over (e.g. all CC).
Yep. I wonder if it's possible to analyze the data to make an educated guess at the average delay from introduction to observation.
 
  • #12
DaveC426913 said:
Yep. My current primitive algorithm predicts my desired target in 2024. Seven years.
You know Ontario gives out vanity plates, right? ;)

Do you have the data in a more accessible format than this?
 
  • #13
mfb said:
You know Ontario gives out vanity plates, right? ;)
Yes. But that would be a false alarm. Heralding the Dawn of a New Age is not something one wants to cheat at.
mfb said:
Do you have the data in a more accessible format than this?
Whachoo talkin' bout Willis? JSON is one of the most flexible, universal formats there are.
 
  • #14
DaveC426913 said:
Whachoo talkin' bout Willis? JSON is one of the most flexible, universal formats there are
There is no numbering that would directly correspond to the available plates, but I converted it to csv, deleted the unused rows and introduced a new numbering now. Three entries are odd:

CCG 2017-06-04 unused - what is this?
What about CCK, where an observation has been missing for a very long time now?
CBH has a very odd observation date.
 
  • #15
mfb said:
There is no numbering that would directly correspond to the available plates, but I converted it to csv, deleted the unused rows and introduced a new numbering now. Three entries are odd:
Two reasons: architecture and data integrity.

I can never be certain that the 'unused' plates will never be observed. I have a strong theory about why they aren't observed (see next item), but my data should not bake in that bias. For all I know, the rules could change and those patterns could start showing up. Or current patterns could change and disappear.

CSV is OK if you're just dealing with a list, and all the items in the list are structurally identical.
But it doesn't deal well with:
  • non-list items, such as extra metadata that is not part of the data array itself (the footnotes could be another object in the JSON); CSV would need another file.
  • items of indeterminate length (some rows have no tags, some have an array of multiple tags, JSON easily accommodates this).
  • changes in functionality. I can always add more functionality to some rows as I expand the features of the app. New properties on a few rows will not hurt the existing functionality of other rows. (A CSV would require a new column for every single row, even if 99% of them have a null value).
Finally, the numbering column you added is not really part of the data as I see it, so I don't think it should be stored there. It can be derived easily enough, since the data ends up in an array and is therefore indexed, without imposing an artificial constraint on the raw data. (It's just another bias that might have to be re-engineered if the app changes.)
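To illustrate what I mean, a couple of records might look like this (property names illustrative, not my exact schema):

Code:
{ "plate": "CDK", "date": "YYYY-MM-DD", "tags": ["unreliable"] }
{ "plate": "CDM", "date": "YYYY-MM-DD" }

The second record simply omits the optional properties; a CSV would need a tags column on every row regardless.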

:smile:

mfb said:
CCG 2017-06-04 unused - what is this?
See the footnote:
Some plates (notably --G, --I, --O, --Q and --U) are never put into circulation. Presumably, they are disqualified as being too ambiguous and easily confused with other letters.
And indeed, I have yet to see any of these.

mfb said:
What about CCK, where an observation has been missing for a very long time now?
One of the most interesting. It is not on the standard list of disqualified combinations, yet I am certain it will never be seen.
I'll wager a large sum that it is disqualified by rules about profanity on plates.

mfb said:
CBH has a very odd observation date.
Why? Because it lags behind its forebears? It's not the only one. CBA, CBX, CCA and several others are late to the table as well.
This is statistically unsurprising in such a scenario.

CDE has never been spotted to date. That's quite possible, considering my observation method is far from perfect.
 
  • #16
I don't say anything against json for the website, but for fitting, a csv or similar without unused rows is just much more practical. That's why I asked. Anyway, I converted it now.
DaveC426913 said:
See the footnote:
I read the footnote, but CCG has a date. Did you observe it (making the footnote wrong) or not (making the entry wrong)?

I don't find profanity for CCK (or anything else interesting), but let's assume it is unused for some reason.
DaveC426913 said:
Why? Because it lags behind its forebears? It's not the only one. CBA, CBX, CCA and several others are late to the table as well.
This is statistically unsurprising in such a scenario.
I found the issue with CBH. You have two CBH in your data, the second one should be CDH. To get rid of unused items I sorted by plate, so CDH, which was introduced 8 months after CBH, appeared between the other CBH and CBJ, making it a massive outlier.

Okay, now everything is cleaned up apart from the CCG observation date issue.
 
  • #17
mfb said:
I don't say anything against json for the website, but for fitting, a csv or similar without unused rows is just much more practical.
I think the unused rows are almost as important as the used ones from a posterity POV. I'd hate to destroy valid information by making the data ambiguous.

mfb said:
I read the footnote, but CCG has a date. Did you observe it (making the footnote wrong) or not (making the entry wrong)?
Ah. That would be a lazy copy-paste error. Thanks for catching that.
mfb said:
I don't find profanity for CCK (or anything else interesting), but let's assume it is unused for some reason.
It's the license plate version of a rude word. No deeper than that.
mfb said:
I found the issue with CBH. You have two CBH in your data, the second one should be CDH. To get rid of unused items I sorted by plate, so CDH, which was introduced 8 months after CBH, appeared between the other CBH and CBJ, making it a massive outlier.
Ah! Thank you again!
 
  • #18
Cleaning up more:

A simple fit suggests date(x) = 6.290 * x + 42585 where x is the plate index (CAA being 1, unused plates don't count) and date is the number of days since the 30th December 1899 (thanks, LibreOffice). I started the fit at CAT to exclude the first plates. Starting it at CAZ leads to date'(x) = 6.068 * x + 42598. Excluding the very last data point I get date''(x) = 6.349 * x + 42583.

Excluding everything before CBJ (the earlier plates have surprisingly early dates and then a few jumps) I get date'''(x) = 5.784 * x + 42615. As you can see, there is still considerable uncertainty in the fit. The last fit shows the smallest deviations over most of the range and ignores the initial data, so I'll use that one.

In the fitted range, the RMS of (fit minus observation date) is 7.8 days, with some observations much later but not many much earlier, as we would expect.
CDW is an outlier (19.5 days early), but if two more plates in between turn out to be unused it becomes normal. Probably something to remove as well, as long as the status of the plates in between is unclear.
CDA is an odd outlier, could the month be wrong there? One month later it would fit in nicely.
CDE is probably another one that didn't get used. A few months is more than we expect for non-observations by chance.
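To turn the last fit into a calendar date (a sketch; the serial dates count days since 1899-12-30):

Code:
// Sketch: evaluate date'''(x) and convert the serial day number to a calendar date.
function predictedObservationDate(x) {
    var serialDay = 5.784 * x + 42615;               // days since 1899-12-30
    var epoch = Date.UTC(1899, 11, 30);              // 30 December 1899, in ms
    return new Date(epoch + serialDay * 86400000);   // 86400000 ms per day
}
// Feed in the index of the plate you care about to get its predicted observation date.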
 
  • #19
mfb said:
Cleaning up more:
D'oh! Forgive my obtuseness for not realizing earlier that you were prepping the data as a prelude to helping me do the calcs. I'd have been a lot more gracious! o0)

mfb said:
CDA is an odd outlier, could the month be wrong there? One month later it would fit in nicely.
No, this is probably accurate.

mfb said:
CDE is probably another one that didn't get used. A few months is more than we expect for non-observations by chance.
This is an artifact of my technique. I am usually only looking for the latest few plates. It's very possible that I would not notice CDE once I'm looking for CDJ and its descendants. After a certain length of time, there's no point in putting a data point in even if I find it.

Which brings me to another issue:
The document is meant to update itself as I provide new data. Which means, rather than taking a fixed set of data and cleaning that up, it will always be working on the latest dataset. So, whatever algorithm I implement will give a fresh estimate every time the page is loaded - there's no data processing phase, which means no data getting thrown out.

I can't just ignore those outliers, because that's a human decision. The best I could do is either:
  • add a condition to the algorithm itself to reject outliers (see the sketch after this list)
  • add another flag - say a 'lousy data point' flag, and then add that manually to certain records. Unfortunately, as has become apparent, I'm not analyzing the data sufficiently to spot errors - let alone outliers. :sorry:
And I still got to turn this into an algorithm...
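For the first option, something like this might work (a sketch, untested; it assumes some linearRegression(y, x) helper that returns slope and intercept):

Code:
// Sketch: fit, drop points that sit too far from the line, then refit.
function fitRejectingOutliers(xs, ys, maxResidualDays) {
    var fit = linearRegression(ys, xs);                // assumed helper: returns {slope, intercept, r2}
    var keptX = [], keptY = [];
    for (var i = 0; i < xs.length; i++) {
        var residual = ys[i] - (fit.slope * xs[i] + fit.intercept);
        if (Math.abs(residual) <= maxResidualDays) {   // e.g. a few weeks' worth of days
            keptX.push(xs[i]);
            keptY.push(ys[i]);
        }
    }
    return linearRegression(keptY, keptX);
}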
 
  • #20
I'm using a JavaScript plugin to do the linear regression. I don't care if it's accurate yet, just getting a line that approximates the data.

This is the result I'm getting:
  • intercept: -67.21099273332828
  • r2: 0.986173087681175
  • slope: 4.988511467716297
I assume that means:
  • first data point is at -67,0
  • each datapoint index i on the x-axis is .98 of a tick (i.e. for i =10, x = 9.8)
  • a given datapoint's
    • x value will be r2 * i
    • y value will be at slope * r2 * i
(I'm not sure why the intercept is negative but doesn't render as negative in the chart.)

[attached chart: linear-reg.png]
So, if I am to render the extrapolation line, I need to provide two data points.
One will be [-67,0] and the other will be [iΩ, y]
where y is iΩ * slope

[attached chart: linear-reg2.png]
 

  • #21
DaveC426913 said:
No, this is probably accurate.
If it is accurate CDA was introduced way earlier than the overall trend suggests.

Adding an "outlier, ignore" flag is probably the best approach.

I don't know the JS plugin you use, but the plotted line for observations made me curious: How do you handle plates not seen yet?
DaveC426913 said:
(I'm not sure why the intercept is negative but doesn't render as negative in the chart.)
Because your chart doesn't start at plate 0. It starts at ~18. If you started at zero, the fit would show negative values.

R2 is a measure of the fit quality, 1 is perfect (everything is on a straight line). It doesn't imply any step size anywhere.
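For the prediction itself you only need slope and intercept (a sketch; lr is whatever your regression function returns, and firstIndex/targetIndex are whichever plate indices you want to draw between):

Code:
// Sketch: r2 is a diagnostic only; predictions come from slope and intercept.
function predictedDay(lr, i) {
    return lr.slope * i + lr.intercept;   // fitted y (day number) at plate index i
}
// Two endpoints for the extrapolation line, from the first plotted index to the target:
var line = [
    [firstIndex, predictedDay(lr, firstIndex)],
    [targetIndex, predictedDay(lr, targetIndex)]
];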
 
  • #22
mfb said:
If it is accurate CDA was introduced way earlier than the overall trend suggests.
Sure. It's probably a sighting of a car fresh from the MTO. It's bound to happen.

(I surmise that I can use this as the minimum possible time for a plate to go from the office to being spotted, and then examine the average delays of other plates.

It would be another line with the same slope that passes through that data point. No other data point could appear below that line without itself becoming the new shortest delay.)

mfb said:
Adding an "outlier, ignore" flag is probably the best approach.
I think I will allow the outliers. I don't think they'll affect the prediction nearly as much as the very nature of my observation method.
The observation method is very unreliable, being dramatically affected by miles traveled per day (which changed quite dramatically), location (also changed quite dramatically), mood and boredom of observer (which, I'm consternated to say, also changes quite dramatically), etc.

It doesn't make sense to throw out just some possibly sketchy data points without junking the whole thing.

mfb said:
I don't know the JS plugin you use, but
I used flot.js to do the rendering (grr. It only supports jQuery. I'm using Angular 1.5, so I have to load jQuery as well, which Angular discourages.)
For the actual linear regression calc, I just stole the function from a stack overflow thread. I'll upload latest, so's'n you can look in the app.js file. (Or I'll just paste it below.)

mfb said:
the plotted line for observations made me curious: How do you handle plates not seen yet?
You mean like CCK and CDZ, whose time frames have passed? Or do you mean like CZZ, which hasn't been released yet?

Of course you mean the former. I filtered out null dates as well as 'unreliable' and 'unused'.

Now that I think about it though, I shouldn't filter out null dates. They still have a valid x, even if their y is not known.

No. Even that's not true...

In the case of CDZ, it was surely released, I just never spotted it. So, upon release, 21,000 plates were issued: CDZA 000 through CDZZ 999.

But, in the case of CCK, there would (probably) be no such release. CCJZ 999 would be immediately followed by CCLA 000.

Shoot, that is an unresolvable ambiguity in the data - an intractable problem. I cannot know if a given sequence added 21,000 plates to the road or if it added zero.

mfb said:
Because your chart doesn't start at plate 0. It starts at ~18. If you started at zero, the fit would show negative values.
But I didn't start at -67. :oops:

mfb said:
R2 is a measure of the fit quality, 1 is perfect (everything is on a straight line). It doesn't imply any step size anywhere.
Ah. Then I'm using it wrong. It sounds like I shouldn't use it at all in the x,y calcs.
 
  • #23
linearRegression takes two arrays: one of y values, one of x values.

Code:
        // Ordinary least-squares fit of y against x.
        // Returns slope, intercept and r2 (coefficient of determination).
        $scope.linearRegression = function (y, x) {
            var lr = {};
            var n = y.length;
            var sum_x = 0, sum_y = 0, sum_xy = 0, sum_xx = 0, sum_yy = 0;

            // accumulate the sums needed for the normal equations
            for (var i = 0; i < n; i++) {
                sum_x  += x[i];
                sum_y  += y[i];
                sum_xy += x[i] * y[i];
                sum_xx += x[i] * x[i];
                sum_yy += y[i] * y[i];
            }

            // slope and intercept of the best-fit line y = slope * x + intercept
            lr.slope = (n * sum_xy - sum_x * sum_y) / (n * sum_xx - sum_x * sum_x);
            lr.intercept = (sum_y - lr.slope * sum_x) / n;

            // r2 = squared correlation coefficient; 1 means a perfect straight line
            lr.r2 = Math.pow(
                (n * sum_xy - sum_x * sum_y) /
                Math.sqrt((n * sum_xx - sum_x * sum_x) * (n * sum_yy - sum_y * sum_y)),
                2
            );

            return lr;
        };
It returns
Code:
intercept: -67.21099273332828
r2: 0.986173087681175
slope: 4.988511467716297
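Called like this (the arrays here are illustrative placeholders):

Code:
// x = plate indices, y = observation dates converted to day numbers (illustrative values)
var x = [18, 19, 20, 21];
var y = [25, 31, 36, 42];
var lr = $scope.linearRegression(y, x);   // note the (y, x) argument order
// lr.slope     -> days added per plate
// lr.intercept -> fitted day value at plate index 0 (can be negative)
// lr.r2        -> fit quality; 1 means perfectly linear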
 
  • #24
DaveC426913 said:
Sure. It's probably a sighting of a car fresh from the MTO. It's bound to happen.
Not in the amount seen here.
If all the plates were issued at the same point in time you would expect an exponential distribution for the spotting time, modified a bit by the driving patterns. With 6 days per plate this pattern gets disturbed a bit for the first days. Based on the variance you spot most patterns within the first two weeks. If the CDW datapoint is accurate it means you spotted all of the plates around it nearly a month late. That is extremely unlikely.
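As a rough sketch of that model (with an illustrative daily spotting chance p):

Code:
// Sketch: with a constant chance p of spotting a given new plate on any given day,
// the probability it is still unseen after d days is (1 - p)^d.
function stillUnseen(p, d) {
    return Math.pow(1 - p, d);
}
// e.g. stillUnseen(0.2, 14) is about 0.04, i.e. most plates spotted within two weeks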
DaveC426913 said:
Now that I think about it though, I shouldn't filter out null dates. They still have a valid x, even if their y is not known.
If there are plates issued, keep them for the index of x, but with an unknown y you can't feed them into the fit as data point.
DaveC426913 said:
Shoot, that is an unresolvable ambiguity in the data - an intractable problem. I cannot know if a given sequence added 21,000 plates to the road or if it added zero.
You can look for CDZ now.
DaveC426913 said:
But I didn't start at -67.
The plates did.
 
  • #25
mfb said:
Not in the amount seen here.
If all the plates were issued at the same point in time you would expect an exponential distribution for the spotting time, modified a bit by the driving patterns. With 6 days per plate this pattern gets disturbed a bit for the first days. Based on the variance you spot most patterns within the first two weeks. If the CDW datapoint is accurate it means you spotted all of the plates around it nearly a month late.
Good point. That was something I've been wondering: do I have enough data to get a good estimate of the latency from release to sighting. It sounds to me like you are pretty sure I do.

And, as you point out, if all the other data has a relatively small divergence from a particular latency average, then it is statistically highly improbable - not that this one was spotted very early - but that so many others were consistently spotted so late.

Which is what you just said...

mfb said:
If there are plates issued, keep them for the index of x, but with an unknown y you can't feed them into the fit as data point.
Yeah. But there still needs to be a placeholder for it. Not sure what one does here. Derive a y value (by interpolation) that has no effect on the linear regression?
That'd be weird. I'd have to revisit that data point after I know its predecessor and its successor to derive a median value. Even that would only approximate the true slope.

mfb said:
The plates did.
Forgive my obtuseness. How?
 
  • #26
DaveC426913 said:
Yeah. But there still needs to be a placeholder for it. Not sure what one does here. Derive a y value (by interpolation) that has no effect on the linear regression?
That'd be weird. I'd have to revisit that data point after I know its predecessor and its successor to derive a median value. Even that would only approximate the true slope.
Derp! Hit me while I was driving.

Because this dataset is not represented by a scatterplot!
It's just a plain old plot.
The x-axis data values are not recorded values; they are indexed - they're integers.

And that's how you get placeholders. Every x-value is represented - it's just that some don't have a y value.

'unused' records do not go in the chart (which is why CBH is followed immediately by CBJ). Everything else does. (Though null dates still have ambiguous status.)

Actually, in theory the chart should clear up the ambiguity! I should be able to see if a given plate was passed over, or if it added 21,000 cars to the road while I wasn't looking.
 
  • #27
DaveC426913 said:
Yeah. But there still needs to be a placeholder for it. Not sure what one does here. Derive a y value (by interpolation) that has no effect on the linear regression?
Every good fitting tool will accept data points as x/y values; just don't give it the ones with missing data.
DaveC426913 said:
Forgive my obtuseness. How?
CAA was introduced about 70 days before you started looking for plates. If you choose the date 0 to be the first observation, then the introduction of CAA will get a negative date value. I don't see what would be surprising about this.
DaveC426913 said:
Actually, in theory the chart should clear up the ambiguity! I should be able to see if a given plate was passed over, or if it added 21,000 cars to the road while I wasn't looking.
In principle this should be possible, but I don't know if the observations are good (early) enough to do this confidently.
 
  • #28
mfb said:
Every good fitting tool will accept data points as x/y values; just don't give it the ones with missing data.
Right but that plate does exist on the x-axis.
If I were to simply skip it, then the slope would jump vertically and be inaccurate.
mfb said:
CAA was introduced about 70 days before you started looking for plates. If you choose the date 0 to be the first observation, then the introduction of CAA will get a negative date value.
Right. Of course. I was looking in the raw data, not the extrapolated line.
 
  • #29
I've realized another headache with my observation technique.
As the months (and even years) pass, I really only need to take a few observations, to be sure that the slope is still on-track, and if not, the new observations will straighten it out.

But imagine I leave it for a year and then decide to check back in and make a few observations. I spot, say, CHD. Is that the first presence of it on the road? Or has CHD been on the road for 2 months now? No way to know. Even if I wait for CHE and then CHF, I still don't know that I spotted the first CHD. It's exactly the same situation as when I first started observing. The first 9 or so plates must be tagged as unreliable. It may take 8 or 9 weeks of observation to get one new reliable data point.
 
  • #30
DaveC426913 said:
Right but that plate does exist on the x-axis.
If I were to simply skip it, then the slope would jump vertically and be inaccurate.
Sure. Your dataset has "plate 54: date, plate 55: date, plate 57: date, plate 58: date, ...".
DaveC426913 said:
The first 9 or so plates must be tagged as unreliable. It may take 8 or 9 weeks of observation to get one new reliable data point.
I would estimate about a month with the previous observation density.
Once your target comes close, a few months of regular observations will help to reduce the error, especially the question how much average lag there is between introduction and first observation. For the first observation, it might be useful to write down how early within the given range the plate is. (CRRA or CRRZ?)
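In code that could look like this (a sketch; the property names and the dayNumber() helper are assumptions about your data):

Code:
// Sketch: keep the true plate index as x, skip records with no observation date.
var xs = [], ys = [];
records.forEach(function (r) {
    if (r.date) {                      // CDZ etc.: the index exists, but there is no data point
        xs.push(r.index);              // plate 54, 55, 57, 58, ... gaps preserved
        ys.push(dayNumber(r.date));    // assumed helper: date -> day count
    }
});
var lr = $scope.linearRegression(ys, xs);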
 
  • #31
mfb said:
Sure. Your dataset has "plate 54: date, plate 55: date, plate 57: date, plate 58: date, ...
Right. Which is why you originally introduced a distinct index.

I was treating the array index as the x-axis.
i.e. my values would end up being:
item[23]: CDJ
item[24]: CDK
item[25]: CDM
So CDL is not there, and therefore there's no gap where there should be one, which puts the next item out of line.

For you:
item[23]: {index:23, plate: CDJ}
item[24]: {index:24, plate: CDK}
item[25]: {index:26, plate: CDM}
So the trend is conserved.

I see now.
 
  • #32
I think what you may be looking for is a Kalman filter.
 

