Markov, find a selection strategy which maximizes probability

In summary, the conversation discusses a homework problem involving three patches for animals to forage in, each with different levels of risk of predation, probability of finding food, and energy gain. The goal is to find a patch selection strategy that maximizes the animal's probability of survival over 20 foraging periods. This can be solved using a standard Markov Decision Problem, with the reward function being 1 if the energy level is greater than or equal to 4 at the end of 20 foraging periods and 0 otherwise.
  • #1
skaterboy1

Homework Statement


There are three patches in which an animal can forage.
Patch 1: Risk of predation is 0. Probability of finding food is 0. Energy gain is 0.
Patch 2: Risk of predation is 0.004. Probability of finding food is 0.4 and energy gain is 3.
Patch 3: Risk of predation is 0.02. Probability of finding food is 0.6 and energy gain is 5.

Foraging in any patch uses one unit of energy reserves. Energy reserves below 4 indicate death. The animal's maximum energy capacity is ten units.

Solve this problem for 20 foraging periods to find a patch selection strategy that maximizes the animal's probability of survival over this period.

Homework Equations



?

The Attempt at a Solution



Three patches; N = 3
States: risk, food finding, and energy gain.
Action: choosing patch 1, 2, or 3.

How do I solve this problem?
 
  • #2
You could try to set it up as a standard Markov Decision Problem, where you want to maximize the expected value of a reward function. In this case, the reward is r = 1 if energy at t = 20 is >= 5 and is 0 otherwise. You need to identify states, decisions and decision-dependent transition probabilities.

RGV
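Ray's suggestion can be carried out by backward induction on the value function. Below is a minimal Python sketch, assuming (as the thread suggests) that the state is the animal's energy reserve (4 through 10, with anything below 4 absorbing as death), that foraging always costs one unit, and that the animal starts with full reserves; the starting energy is not given in the problem, so that choice is an assumption. The patch parameters are taken directly from the problem statement.

```python
# Backward-induction sketch for the foraging MDP.
# State: energy reserve x in {4, ..., 10}; x < 4 means death (value 0).
# Patch parameters from the problem: (predation risk, P(find food), energy gain).
PATCHES = [(0.0, 0.0, 0), (0.004, 0.4, 3), (0.02, 0.6, 5)]
T = 20          # number of foraging periods
X_MAX = 10      # maximum energy capacity
X_DEATH = 4     # reserves below this level mean death

def solve():
    # V[x] = max probability of surviving to the horizon from energy x.
    # Terminal condition: the animal is alive at t = T iff x >= 4.
    V = {x: 1.0 for x in range(X_DEATH, X_MAX + 1)}
    policy = []  # policy[t][x] = best patch (1-based) at time t with energy x
    for t in range(T - 1, -1, -1):
        newV, pol = {}, {}
        for x in range(X_DEATH, X_MAX + 1):
            best, best_patch = -1.0, None
            for i, (risk, p_food, gain) in enumerate(PATCHES):
                x_fed = min(x - 1 + gain, X_MAX)   # energy if food is found
                x_fast = x - 1                      # energy if no food is found
                v_fed = V.get(x_fed, 0.0)           # 0.0 once energy drops below 4
                v_fast = V.get(x_fast, 0.0)
                # Survive predation, then find food or not.
                val = (1 - risk) * (p_food * v_fed + (1 - p_food) * v_fast)
                if val > best:
                    best, best_patch = val, i + 1
            newV[x], pol[x] = best, best_patch
        V = newV
        policy.insert(0, pol)
    return V, policy

V, policy = solve()
print("Survival probability starting with full reserves:", V[X_MAX])
```

Note the structure the recursion exposes: patch 1 is "safe" but always drains energy, so from x = 4 it guarantees death, while the risky patches trade a small predation probability for the chance to rebuild reserves. The optimal policy is therefore state- and time-dependent rather than a single fixed patch.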
 
  • #3
Ray Vickson said:
You could try to set it up as a standard Markov Decision Problem, where you want to maximize the expected value of a reward function. In this case, the reward is r = 1 if energy at t = 20 is >= 5 and is 0 otherwise. You need to identify states, decisions and decision-dependent transition probabilities.

RGV

Why is the reward r = 1 if energy is >= 5 (not 4?), and why 0 otherwise?
 
  • #4
Do you have any sample problems to show me?
 
  • #5
skaterboy1 said:
Why is the reward r = 1 if energy is >= 5 (not 4?), and why 0 otherwise?

OK, it should be r = 1 if energy >= 4 and 0 otherwise.

RGV
 
  • #6
skaterboy1 said:
Do you have any sample problems to show me?

Do you not have a textbook? Are there no lecture notes? Anyway, Google is your friend: try "Markov decision problem".

RGV
 

Related to Markov, find a selection strategy which maximizes probability

1. What is Markov Selection Strategy?

Markov selection strategy is a mathematical tool used to determine the optimal selection of a set of choices that will maximize the probability of a desired outcome. It is based on the concept of Markov chains, which are stochastic models that analyze the probability of transitioning from one state to another over a series of events or decisions.

2. How does Markov Selection Strategy work?

Markov selection strategy works by analyzing the probabilities associated with each choice in a given set and determining the most likely sequence of events that will lead to the desired outcome. It takes into account the current state of the system and the transition probabilities between states to calculate the overall probability of achieving the desired outcome.
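The mechanics described above can be illustrated with a tiny numerical example. The chain below is hypothetical (not from the thread): two states, a transition matrix P, and the state distribution after several steps, computed by repeated matrix multiplication.

```python
import numpy as np

# Hypothetical 2-state Markov chain: P[i][j] = probability of
# transitioning from state i to state j in one step.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
start = np.array([1.0, 0.0])  # begin in state 0 with certainty

# State distribution after 5 steps: start row-vector times P^5.
dist = start @ np.linalg.matrix_power(P, 5)
print(dist)  # probabilities over the two states; they sum to 1
```

This is the "transition probabilities between states" bookkeeping in its simplest form; a decision problem layers a choice on top, with a different transition matrix for each available action.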

3. What are the benefits of using Markov Selection Strategy?

Markov selection strategy allows for a systematic and objective approach to decision-making, as it is based on mathematical principles rather than subjective judgments. It also takes into account the uncertainty and randomness inherent in many real-world scenarios, making it a valuable tool for decision-making in complex situations.

4. What are the limitations of Markov Selection Strategy?

Markov selection strategy relies on certain assumptions, such as the Markov property, which states that the probability of transitioning from one state to another is only dependent on the current state and not on previous states. This may not always hold true in real-world situations, making the results of the strategy less accurate. Additionally, it may be computationally intensive for complex systems with a large number of states and choices.

5. How can Markov Selection Strategy be applied in scientific research?

Markov selection strategy has a wide range of applications in scientific research, particularly in fields such as economics, biology, and engineering. It can be used to model and analyze complex systems, make predictions, and optimize decision-making processes. It has also been applied in machine learning and artificial intelligence algorithms to improve their performance and efficiency.
