How Effective Is a Neural Network Model in Predicting Poker Hands?

Discussion Overview

The discussion centers on the effectiveness of a neural network model in predicting poker hands, specifically the statistical methods used to estimate the accuracy of the model's predictions. The context is the application of the model to poker scenarios and the evaluation of its output probabilities against actual observations.

Discussion Character

  • Technical explanation
  • Mathematical reasoning
  • Debate/contested

Main Points Raised

  • One participant describes a neural network model that outputs a vector of 169 probabilities representing the likelihood of various poker hands based on specific game situations and actions.
  • Another participant suggests using cross-validation and a test statistic, such as Pearson's chi-square statistic, to assess how well the observed data fits the model's predicted distribution.
  • A participant expresses concern that since the model's output is already a distribution, and observations are from different instances, traditional cross-validation may not be applicable in this case.
  • There is a repeated emphasis on the challenge of testing the model's accuracy given that each observation is unique and derived from different distributions.

Areas of Agreement / Disagreement

Participants do not reach a consensus on the best method to evaluate the model's accuracy, with differing opinions on the applicability of cross-validation and the nature of the output distribution complicating the discussion.

Contextual Notes

The discussion highlights limitations related to the uniqueness of observations and the challenges in applying traditional statistical methods to a model that outputs a probability distribution for each situation.

tomeram
Hey

I have a neural network model that produces as output a vector of 169 values representing the probability of holding each possible hand in poker (2 random cards dealt from a regular deck - 169 possibilities if you distinguish only whether the cards are from the same suit or not).
The model predicts, for each specific situation and action taken by a player in the game, the distribution over all possible hands.
I kept a random sample of the data aside for testing the model, and now I want to test it. Each row in the test set contains the data of the situation, the action the player took and the hand he had, while the model produces a vector of 169 values (the probability of having each possible hand).
I am looking for a statistical method to estimate the accuracy of the model - some kind of method that can say what the probability is that the observation came from the distribution produced by the model.
Thanks
Tomer
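(As a quick sanity check on the 169 figure: with 13 ranks, and hands distinguished only by rank and by whether the two cards share a suit, the count is 13 pocket pairs plus 78 suited plus 78 offsuit rank combinations.)

```python
from math import comb

# 13 ranks -> 13 pocket pairs.
pairs = 13
# C(13, 2) = 78 unordered rank combinations, each occurring
# both suited and offsuit.
suited = comb(13, 2)
offsuit = comb(13, 2)

total = pairs + suited + offsuit
print(total)  # 169
```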
 
Cross-validation to test the accuracy of the model?

To test how well the observations fit the probability distribution produced by the model, you could construct a test statistic (e.g. Pearson's chi-square statistic) and repeatedly sample from your model's probability distribution to produce a distribution for the test statistic. This allows you to give a p-value (e.g. the test statistic is greater than Z with y% probability), and you can then compare the value of the test statistic computed from your observations to see how well they fit the model.

Edit: I should say that in order for this to be reliable, you should be training the model on a different data set to the one you're later using to test its accuracy.
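A minimal sketch of the procedure described above, assuming a batch of observed hands that all share a single predicted distribution (the function names, the uniform toy distribution, and the simulation count are illustrative, not from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)

def pearson_chi2(counts, expected):
    """Pearson's chi-square statistic for observed vs expected counts."""
    return np.sum((counts - expected) ** 2 / expected)

def monte_carlo_p_value(p, observed_counts, n_sims=2000):
    """Estimate a p-value for the observed statistic by resampling hands
    from the model's predicted distribution p.
    Assumes every entry of p is strictly positive (e.g. a softmax output)."""
    n = observed_counts.sum()
    expected = n * p
    observed_stat = pearson_chi2(observed_counts, expected)
    # Null distribution of the statistic: counts drawn from p itself.
    sims = rng.multinomial(n, p, size=n_sims)
    sim_stats = ((sims - expected) ** 2 / expected).sum(axis=1)
    return np.mean(sim_stats >= observed_stat)

# Toy usage: 169 categories, uniform model prediction, 500 observations.
p = np.full(169, 1 / 169)
counts = rng.multinomial(500, p)
print(monte_carlo_p_value(p, counts))
```

Since the toy data is drawn from the model's own distribution, the p-value should usually not be small; data the model fits poorly would push it toward zero.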
 
Hi
Thanks
The problem is that the output of the model is already a distribution - the distribution of having a certain hand. The other problem is that each observation comes from a different distribution, and I need to know whether the model predicts each distribution correctly. It is rare to get two points from the same distribution, so I have to build a test based on a single point from each distribution. I don't think cross-validation will help this time.
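One possible way to adapt the resampling idea to this setting (a hedged sketch, not something proposed in the thread): pool the single points into one statistic, e.g. the total log-probability the model assigns to the realized hands, and build its null distribution by drawing one simulated hand per row from that row's own predicted distribution. All names and the Dirichlet toy data below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_likelihood(preds, outcomes):
    """Total log-probability the model assigns to the realized hands.
    preds: (n_rows, n_cats) predicted distributions, one per observation.
    outcomes: (n_rows,) integer index of the hand actually held."""
    return np.sum(np.log(preds[np.arange(len(outcomes)), outcomes]))

def simulate_null(preds, n_sims=2000):
    """Null distribution of the statistic: draw one hand per row from that
    row's own predicted distribution, so each observation may come from a
    different distribution."""
    n_rows, _ = preds.shape
    cum = preds.cumsum(axis=1)
    stats = np.empty(n_sims)
    for s in range(n_sims):
        u = rng.random((n_rows, 1))
        draws = (u > cum).sum(axis=1)  # inverse-CDF sampling per row
        stats[s] = log_likelihood(preds, draws)
    return stats

def p_value(preds, outcomes, n_sims=2000):
    obs = simulate_null(preds, n_sims)
    observed = log_likelihood(preds, outcomes)
    # Small value -> the realized hands are less likely than the model claims.
    return np.mean(obs <= observed)

# Toy usage: 10 rows, each with its own predicted distribution over 4 hands.
preds = rng.dirichlet(np.ones(4), size=10)
outcomes = np.array([rng.choice(4, p=row) for row in preds])
print(p_value(preds, outcomes, n_sims=500))
```

This tests overall calibration of the predicted distributions, not per-row accuracy; whether it answers the original question is a judgment call.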
 