Does risk aversion cause diminishing marginal utility, or vice versa?

In summary: the von Neumann-Morgenstern theorem states that, given certain rationality axioms, a person's preferences can be described by a utility function, and the person is risk averse exactly when that utility function is concave in money. Expressing bets in terms of their utility rather than their monetary value lets risk aversion be represented inside the expected-utility framework. However, even with a well-defined utility function, humans do not always follow the rationality axioms, so their actual preferences may not fit the framework, which makes their risk aversion hard to pin down. The open question of this thread is which way the causation runs between risk aversion and the shape of the utility function.
  • #1
lugita15
Let A be the set of possible states of the world, or possible outcomes a person could face. Let G(A) be the set of "gambles" or "lotteries", i.e. the set of probability distributions over A. Then each person would have a preferred ordering of the states in A, as well as a preferred ordering of the lotteries in G(A). The von Neumann-Morgenstern theorem states that, assuming your preference ordering over G(A) obeys certain rationality axioms, your preferences can be described by a utility function u: A → ℝ. (This function is unique up to multiplication by a positive constant and addition of a constant.) That means that for any two lotteries p and q in G(A), you prefer p to q if and only if the expected value of u under p is greater than the expected value of u under q. In other words, you maximize the expected value of the utility function.
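
To make "maximize the expected value of the utility function" concrete, here is a minimal Python sketch; the outcomes, utilities, and lotteries are made up purely for illustration.

```python
# Minimal sketch: compare two lotteries p and q over outcomes in A by E[u].
u = {"poor": 0.0, "ok": 5.0, "rich": 8.0}   # hypothetical utility function u: A -> R

p = {"poor": 0.5, "rich": 0.5}   # risky 50/50 lottery
q = {"ok": 1.0}                  # sure thing

def expected_utility(lottery, util):
    """Expected value of the utility function under the lottery."""
    return sum(prob * util[outcome] for outcome, prob in lottery.items())

print(expected_utility(p, u))   # 4.0
print(expected_utility(q, u))   # 5.0 -> this agent prefers q to p
```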

Now just because you maximize the expected value of your utility function does not mean that you maximize the expected value of actual things like money. After all, people are often risk averse; they say "a bird in the hand is worth two in the bush". Risk aversion means that you value a gamble at less than the expected value of the money you'd gain. If we express this notion in terms of the von Neumann-Morgenstern utility function, we get the following result: a person is risk averse if and only if their utility function is a concave function of money, i.e. the extent to which you're risk averse is the same as the extent to which you have a diminishing marginal utility of money. (See page 13 of this PDF.)
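
As a rough illustration of that equivalence, here is a small Python sketch using a square-root utility of money (the concave function is just an assumption for the example): the sure amount the agent would accept in place of a 50/50 gamble over $0 and $100 sits below the gamble's $50 expected payout.

```python
# Sketch: concave utility of money (sqrt, purely illustrative) implies risk aversion.
import math

u = math.sqrt                              # diminishing marginal utility of money
expected_u = 0.5 * u(0) + 0.5 * u(100)     # expected utility of the gamble = 5.0

# Certainty equivalent: the sure amount whose utility equals the gamble's E[u].
certainty_equivalent = expected_u ** 2     # invert u (sqrt) by squaring
print(certainty_equivalent)                # 25.0, well below the $50 expected value
```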

My question is, which direction does the causation run? Do the values of the von Neumann-Morgenstern utility function reflect the intensity of your preferences, with risk aversion arising because a future self who is well-off cares less about extra money than a future self who is poorer and thus values money more (as Brad DeLong suggests here)? Or does the causation run the other way: does your tolerance for risk determine the shape of your utility function, so that the von Neumann-Morgenstern utility function tells you nothing about the relative intensity of your preferences?

Any help would be greatly appreciated.

Thank You in Advance.
 
  • #2
Is it really appropriate to call this risk aversion? It is risk aversion in terms of your bank account numbers, but not in terms of your utility*. A concave money->utility function is very natural: if you have no money at all, life is hard. If you have some money, you gain a lot - like a bed and a proper food supply. If you have 1000 times that money, the bed can be more comfortable, but that is not worth 1000 times the simple bed.

*Actually, humans don't follow the required rationality axioms.

There is another type of risk aversion: risk aversion within the utility function.
Let's assume you have a well-defined, known utility function, so we can express all bets in terms of their utility. You start with an account of "1" utility-unit and you can use this to participate in bets (negative values are not allowed). I will offer you two bets in series:
(A) Give away 1 unit of utility, gain 11 with 10% probability and 0 with 90% probability.
afterwards: (B) Give away 1 unit of utility, gain 100 with 90% probability and 0 with 10% probability.

With that knowledge, you will reject (A) even though it has a positive expectation value - the risk that you lose and cannot participate in (B) any more is too large.

Now consider a modified setup: I don't tell you what (B) will be. Do you accept (A)? If you expect a sufficiently good bet for (B) (which requires some probability distribution over the bets I could offer), it is rational to be risk-averse and reject (A).
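
For the first (known-B) setup, a quick Python check with the numbers above shows why rejecting (A) is the better policy even though (A) alone has a positive expectation value:

```python
# Quick check of the known-B setup: start with 1 utility-unit; you need at
# least 1 unit to enter a bet and the balance cannot go negative.

def after_B(balance):
    """Expected balance after being offered bet B: pay 1, gain 100 with probability 0.9."""
    if balance < 1:
        return balance                      # too poor to participate
    return (balance - 1) + 0.9 * 100

reject_A = after_B(1.0)                     # keep the 1 unit for B: 0 + 90 = 90
accept_A = 0.1 * after_B(11.0) + 0.9 * after_B(0.0)   # 0.1 * 100 + 0.9 * 0 = 10

print(accept_A, reject_A)                   # 10.0 vs 90.0 -> reject (A)
```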
 
  • #3
lugita15 said:
My question is, which direction does the causation run?
And my answer is "yes." Or maybe "no."

You have a bit of a map-territory inversion going on here. The map is not the territory (Alfred Korzybski). By asking which is cause, which is effect, you are confusing the map for the territory. The concepts of a utility function and risk aversion are parts of the same map. Cause and effect -- that's asking about the territory.

Note also that this map is not perfect. People are not rational. The same people who buy lottery tickets are also unwilling to invest their hard-earned money in a place where it might grow. The same people who drop tens of thousands of dollars for a frivolous night on the town irrationally pinch pennies elsewhere.
 
  • #4
mfb said:
Is it really appropriate to call this risk aversion? It is risk aversion in terms of your bank account numbers, but not in terms of your utility*.
Well, I'm just using standard terminology here. What you call "risk aversion in terms of your bank account numbers" is what people usually just refer to as risk aversion. Of course, risk aversion can be perfectly rational behavior, because as you said, it doesn't mean that you value a gamble at less than the expected value of the utility, it just means that you value a gamble at less than the expected value of the monetary payout.
A concave money->utility function is very natural: if you have no money at all, life is hard. If you have some money, you gain a lot - like a bed and a proper food supply. If you have 1000 times that money, the bed can be more comfortable, but that is not worth 1000 times the simple bed.
Well, you're interpreting the utility function as representing intensity of preference. But couldn't you just as well view the utility function as a summary of your tolerance for risk?
*actually, humans don't follow the required rationality axioms.
Well, for the purpose of the question, I'm assuming the von Neumann-Morgenstern axioms.
Let's assume you have a well-defined, known utility function, so we can express all bets in terms of their utility. You start with an account of "1" utility-unit and you can use this to participate in bets (negative values are not allowed). I will offer you two bets in series:
(A) Give away 1 unit of utility, gain 11 with 10% probability and 0 with 90% probability.
afterwards: (B) Give away 1 unit of utility, gain 100 with 90% probability and 0 with 10% probability.
Yes, I'm familiar with the Allais paradox. But again, I'm not trying to assess whether humans actually obey these axioms. (That ship has probably sailed by now; there is a whole field called behavioral economics, after all.)
 
  • #5
D H said:
The concepts of a utility function and risk aversion are parts of the same map. Cause and effect -- that's asking about the territory.
Certainly utility functions are a human construct. But here are two things that are presumably not human constructs: attitudes concerning risk, and relative intensity of preferences. Questions of the form "Do you prefer A to B more strongly than you prefer C to D?", and questions of the form "Do you prefer the lottery pA + qB to the lottery rC + sD?" (where p, q, r, and s are probabilities and A, B, C, and D are outcomes), seem to be questions about the territory, not about the map. Moreover, they seem to be questions concerning different parts of the territory. Yet we have a theorem that says that (assuming certain rationality axioms) a person's answers to questions of the first type completely determine their answers to questions of the second type, and vice versa. This seems to suggest that the answers to one of these sets of questions are more fundamental than the answers to the other set.

The fundamental issue is this: do people have intensities of preference that determine their attitude toward risk? Or do people have attitudes toward risk that make it appear as if they have certain intensities of preference?

Note also that this map is not perfect. People are not rational.
Point taken. I was just trying to explore the implications of the von Neumann-Morgenstern axioms, not defend them.
 
  • #6
lugita15 said:
Well, you're interpreting the utility function as representing intensity of preference. But couldn't you just as well view the utility function as a summary of your tolerance for risk?
How can you define "risk" without a utility function?
There is no universal axiom "money is good to have", and there is no prior reason to assume that utility is linear with anything. I can keep track of the logarithm of my money, or an exponential function. Why should a utility function be proportional to some specific way to keep track of money?

The Allais paradox is something different - and I think psychological effects shouldn't be neglected there (if you pick 1B, you will really hate that decision in 1% of the cases).
 
  • #7
mfb said:
How can you define "risk" without a utility function?
Certainly you can't really talk about attitudes towards risk without talking about preferences and ordinal utility. But do you really need a cardinal utility function in order to talk about attitudes toward risk?
mfb said:
There is no universal axiom "money is good to have", and there is no prior reason to assume that utility is linear with anything.
I agree with that. When did I assume either of those things?
mfb said:
Why should a utility function be proportional to some specific way to keep track of money?
When did I say that it should?
mfb said:
The Allais paradox is something different
Your numbers seemed pretty similar to the Allais paradox, and the way a lot of people deal with the Allais paradox is by rejecting the von Neumann-Morgenstern axioms in favor of axioms that accommodate the notion you called "risk aversion within the utility function". But that's irrelevant to this thread.
 
  • #8
lugita15 said:
Certainly you can't really talk about attitudes towards risk without talking about preferences and ordinal utility. But do you really need a cardinal utility function in order to talk about attitudes toward risk?
I don't see how you can define "risk" at all, without at least an implicit notion of utility.
If you consider the risk to lose something as negative, you already attached a utility to the object you might lose.

I agree with that. When did I assume either of those things?
When did I say that it should?
If utility is an arbitrary function of money, it is pointless to talk about risks. You have no way to say what a risk is.
 
  • #9
Here is another way to phrase my question: if you consider only preferences over outcomes, you can define a utility function that is unique only up to a monotonic transformation. But if you also consider preferences over lotteries, then you can define a utility function that is unique up to a positive affine transformation, i.e. if two functions u and v represent the same set of preferences over lotteries, then v = a + bu for some constants a and b with b > 0. What is the reason for that? Is it because your preferences over lotteries reveal more information about your preferences over outcomes, specifically the relative intensities of those preferences? Or is nothing at all being revealed about intensity of preference, with the new information being only about your preferences concerning lotteries?
 
  • #10
mfb said:
I don't see how you can define "risk" at all, without at least an implicit notion of utility.
If you consider the risk to lose something as negative, you already attached a utility to the object you might lose.
This may just be a matter of semantics. You're making a statement about your preference ordering (i.e. that you prefer having something to losing it), but that doesn't mean you need to invoke a cardinal utility function. But yes, you are making a statement about utility, just ordinal utility.
mfb said:
If utility is an arbitrary function of money, it is pointless to talk about risks. You have no way to say what a risk is.
Risk is uncertainty concerning what outcome will occur. When I said "attitudes toward risk", all I meant is "attitudes toward uncertain outcomes".
 
  • #11
lugita15 said:
Here is another way to phrase my question: if you consider only preferences over outcomes, you can define a utility function that is unique only up to a monotonic transformation. But if you also consider preferences over lotteries, then you can define a utility function that is unique up to a positive affine transformation, i.e. if two functions u and v represent the same set of preferences over lotteries, then v = a + bu for some constants a and b with b > 0. What is the reason for that?
Consider three outcomes a,b,c, where you like a less than b and b less than c.

Define the utility of a to be some arbitrary value A and the utility of c to be some arbitrary value C with C>A.

Now consider a bet: you can either get b with 100% probability, or get c with probability p and a with probability 1-p. Choose p in such a way that you are indifferent between the two options. This allows us to assign b the utility B = pC + (1-p)A.

You can do the same for every set of 3 possible outcomes to fix the utility values of every outcome, and the only freedom is the initial choice of the utilities for two outcomes (here: A and C).
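
A small Python sketch of this calibration, with an arbitrary normalization and a made-up indifference probability:

```python
# Fix the utilities of the worst and best outcomes, then use the indifference
# probability p to pin down the utility of the middle outcome b.
A, C = 0.0, 1.0          # arbitrary choices for u(a) and u(c), with C > A
p = 0.75                 # hypothetical: indifferent between "b for sure" and
                         # "c with probability p, a with probability 1-p"

B = p * C + (1 - p) * A
print(B)                 # 0.75 = the vNM utility assigned to b
```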

Is it because your preferences over lotteries reveal more information about your preferences over outcomes, specifically the relative intensities of those preferences?
Right. Lotteries allow us to compare three outcomes quantitatively.
 
  • #12
mfb said:
Consider three outcomes a,b,c, where you like a less than b and b less than c.

Define the utility of a to be some arbitrary value A and the utility of c to be some arbitrary value C with C>A.

Now consider a bet: you can either get b with 100% probability, or get c with probability p and a with probability 1-p. Choose p in such a way that you are indifferent between the two options. This allows us to assign b the utility B = pC + (1-p)A.
Yes, that's how von Neumann-Morgenstern utility functions are constructed, but the question is, how do you know that the value this procedure assigns to b really represents how much you like b compared to how much you like a and c? In other words, how do you know that (C-B)/(B-A) represents the extent to which you prefer c to b compared to the extent to which you prefer b to a? Couldn't it be that this number instead reflects your feelings concerning uncertain outcomes vs. certain outcomes, and tells you nothing about the actual relative intensities of your preferences concerning a, b, and c?

For instance, suppose that a was "getting 0 dollars", b was "getting 5 dollars", and c was "getting 10 dollars". And suppose that you're indifferent between getting 5 dollars on the one hand, and having a 75% chance of getting 10 dollars and a 25% chance of getting 0 dollars on the other hand. Then B would equal .25A + .75C. One way to interpret that is to say that going from 5 dollars to 10 dollars matters one third as much to you as going from 0 dollars to 5 dollars. But couldn't the explanation instead be that you value the 5 dollars more because it's a "sure thing", and you prefer certain outcomes to uncertain outcomes?
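
To spell out the "one third" arithmetic with the numbers above: since B = 0.25A + 0.75C,
$$\frac{C-B}{B-A} = \frac{0.25\,(C-A)}{0.75\,(C-A)} = \frac{1}{3},$$
so on the "intensity" reading the step from 5 to 10 dollars carries one third the utility of the step from 0 to 5 dollars.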

Right. Lotteries allow us to compare three outcomes quantitatively.
But do the values produced by the lotteries just have to do with your feelings about the three outcomes, or could they be telling you something about another set of preferences you have, one dealing with uncertainty?
 
  • #13
Now you are asking better questions. You are essentially asking whether expected utility theory has any utility at all. Economists have found multiple issues with expected utility theory. Some, most notably Matthew Rabin, have gone so far as to say that it has no utility whatsoever. Expected utility is not just pining for the fjords. It is, in Rabin's words, an ex-hypothesis.

Rabin, Matthew. "Risk Aversion and Expected-Utility Theory: A Calibration Theorem." Econometrica 68, no. 5 (2000): 1281-1292.

A better view is that the expected utility hypothesis is a simplification of reality. Physicists and engineers do this all the time. Ohm's law. Newton's laws of motion. Sometimes the magic works, sometimes it doesn't. The problem with Rabin's criticism is that the magic does work sometimes. There does exist some economic space where the hypothesis is valid. The problem with rejecting his criticism is that the magic oftentimes doesn't work. The space where the theory is valid is bounded. It is not universally true.
 
  • #14
D H said:
Now you are asking better questions.
I've been trying to ask the same fundamental question all along: assuming that the von Neumann-Morgenstern axioms are correct, do the relative intensities of your preferences determine your preferences concerning uncertainty, or do your preferences over lotteries arise separately from your preferences over certain (i.e. sure) outcomes? (I phrase the question a bit more mathematically in my post #9, in terms of the set of transformations that the utility function is unique up to.)
D H said:
You are essentially asking whether expected utility theory has any utility at all.
No, I'm not. I'm taking for granted the assumption that people maximize the expected value of their von Neumann-Morgenstern utility function. So my question isn't whether people maximize expected utility, but WHY they maximize it (assuming they do). I think there are basically two possible answers to my question:

1. Assuming the von Neumann-Morgenstern axioms are right, the shape of the von Neumann-Morgenstern utility function reflects the relative intensities of your preferences over certain (i.e. sure) outcomes, and that determines your preferences concerning uncertain outcomes, via the expected value of the utility function.

2. Assuming the von Neumann-Morgenstern axioms are right, the shape of the von Neumann-Morgenstern utility function tells you absolutely nothing about the relative intensities of your preferences. Instead, it summarizes your preferences concerning uncertain outcomes. That is why preferences concerning uncertain outcomes can be so easily gleaned from calculating the expected value of the utility function.

D H said:
Economists have found multiple issues with expected utility theory. Some, most notably Matthew Rabin, have gone so far as to say that it has no utility whatsoever. Expected utility is not just pining for the fjords. It is, in Rabin's words, an ex-hypothesis.

Rabin, Matthew. "Risk Aversion and Expected-Utility Theory: A Calibration Theorem." Econometrica 68, no. 5 (2000): 1281-1292.
That paper is about types of risk aversion that (he claims) cannot be modeled in expected utility theory, specifically "loss aversion". But my goal is more modest: I'm trying to understand the source of the kind of risk aversion that IS compatible with expected utility theory.
 
  • #15
lugita15, I think your question is really interesting but the discussion has lost me at various points.

I've always seen 'risk aversion' and 'diminishing marginal utility' (in the context of VNM) as two pieces of language to describe the same phenomenon, so at first it didn't strike me as a well framed question.

Could you elaborate a little on your post #9, and about linear and affine transformations? I think you guys have a very good grasp of maths here and I'm not so good so go easy!
 
  • #16
wigglywoogly said:
lugita15, I think your question is really interesting but the discussion has lost me at various points.

I've always seen 'risk aversion' and 'diminishing marginal utility' (in the context of VNM) as two pieces of language to describe the same phenomenon, so at first it didn't strike me as a well framed question.
Yes, there is a theorem that for von Neumann-Morgenstern utility functions, risk aversion and diminishing marginal utility are equivalent. So as you say, they're both ways of describing the same underlying phenomenon, but the question is, what is that phenomenon? Here are two possible answers:

1. The values of the von Neumann-Morgenstern utility function u genuinely reflect the intensity of a person's preferences for the various states in A. So for instance, if u(c)-u(b) = 3(u(b)-u(a)) for three states a, b, and c in A, then that means that the person prefers c to b 3 times as much as they prefer b to a. And a person's preferences concerning the lotteries in G(A) are merely a reflection of the intensities of their preferences over the states in A. In other words, people tend to maximize the expected value of their happiness, taking their intensity of preferences into account.

2. The values of the von Neumann-Morgenstern utility function u do not reflect the intensity of a person's preferences. Instead, they reflect a person's preferences concerning risk, for instance the fact that they prefer a sure outcome to a risky one, even if it means a lower payout. The only reason that people tend to maximize the expected value of their von Neumann-Morgenstern utility function is that the values of that function are determined by their inherent preferences concerning the lotteries in G(A).

So to summarize, the question is whether intensities of preference determine preferences over lotteries and thus the values of the vNM utility function, or whether inherent attitudes toward risk, independent of intensity of preference, determine the values of the vNM utility function.
wigglywoogly said:
Could you elaborate a little on your post #9, and about linear and affine transformations? I think you guys have a very good grasp of maths here and I'm not so good so go easy!
Sure, I'm happy to elaborate. Suppose this is the ordering of my favorite foods, from least favorite to most favorite:

1. Salad
2. Pretzel
3. French Fries
4. Pizza

Given this information, how would you make a utility function for me? Well, we can let u(Salad) = 1, u(Pretzel) = 2, u(French Fries) = 3, and u(Pizza) = 4. This function satisfies the criterion that u(x) > u(y) if and only if I prefer x to y. But the thing is, this is not the only function that satisfies this criterion; here's another one: v(Salad) = 2, v(Pretzel) = 343, v(French Fries) = 500, and v(Pizza) = 1 trillion. So which one is my utility function, u or v? Well, in some sense they both are, because they both satisfy that criterion. The values of v may look nothing like the values of u, but the important thing is that v preserves the ordering of values that u has. The fancy way of saying that is that v is a monotonic transformation of u. So the conclusion is that if the only information you have is the ordering of preferences over the states in A, then you can only define the utility function up to a monotonic transformation, which means that if you construct a utility function u for a given preference ordering, someone else can construct a utility function v, a monotonic transformation of u, that will work just as well. Or in more technical language, ordinal utility functions (functions constructed from a preference ordering over A) are unique only up to a monotonic transformation.

Now let's consider the vNM case. The von Neumann-Morgenstern theorem states that if a person's preferences over G(A) satisfy certain conditions, then there exists a function u: A → ℝ such that the person tends to maximize the expected value of u. Just like in the case of ordinal utility functions, u is not the only function which satisfies this criterion. But this time, an arbitrary monotonic transformation of u won't necessarily satisfy the criterion. There are only two operations you can perform on u while maintaining the criterion:

1. Multiplying u by a positive constant

2. Adding a constant to u.

So that means that if u satisfies the criterion, only functions of the form a + bu with b > 0, known as positive affine transformations of u, will satisfy the criterion. To put it in technical language, von Neumann-Morgenstern utility functions are unique up to positive affine transformations, as opposed to ordinal utility functions, which are only unique up to monotonic transformations. So a given person has a lot of ordinal utility functions representing them, but far fewer vNM utility functions representing them.

What that means in practical terms is that a vNM utility function conveys more information than an ordinal utility function (which only conveys the preference ordering over A). Specifically, if u and v are two vNM utility functions for the same person, then (u(z)-u(y))/(u(y)-u(x)) = (v(z)-v(y))/(v(y)-v(x)). (This follows from the fact that v must equal a + bu for some constants a and b.) That number is therefore independent of which vNM utility function you use, so the question is, what information does it convey? Does it show how much you prefer z to y compared to how much you prefer y to x? (That would mean that the vNM utility function reflects the intensities of your preferences over A.) Or does it reflect the fact that you prefer a 100% chance of getting y to a lottery with certain probabilities of getting x and z, because you value a "sure thing" rather than taking on too much risk? (That would mean that the vNM utility function reflects your inherent preference ordering over G(A).)
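
To illustrate the difference, here is a short Python sketch with made-up utility numbers: a positive affine transformation preserves both the lottery ranking and the ratio above, while a merely monotonic transformation (a square root here) preserves the ordering over outcomes but can change the lottery ranking and the ratio.

```python
import math

u = {"x": 0.0, "y": 0.4, "z": 1.0}                    # hypothetical vNM utilities
v = {o: 3.0 + 2.0 * val for o, val in u.items()}      # positive affine: v = 3 + 2u
w = {o: math.sqrt(val) for o, val in u.items()}       # monotonic but not affine

def eu(lottery, util):
    return sum(p * util[o] for o, p in lottery.items())

sure_y = {"y": 1.0}                 # y for certain
gamble = {"x": 0.5, "z": 0.5}       # 50/50 between x and z

for util in (u, v, w):
    ratio = (util["z"] - util["y"]) / (util["y"] - util["x"])
    print(eu(gamble, util), eu(sure_y, util), round(ratio, 2))
# u: 0.5  0.4   1.5  -> gamble preferred
# v: 4.0  3.8   1.5  -> same ranking, same ratio (still a vNM utility function)
# w: 0.5  0.63  0.58 -> ranking flips, ratio changes: w is a valid ordinal utility
#                       function for this person but not a vNM utility function
```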

Whew! I didn't intend to make this post so long, so tell me if you have any questions.
 
  • #17
Jeez. I just started reading 'Theory of Games and Economic Behavior' to get some clarity on this - it's fun but quite dense reading.

So am I right in thinking that monotonic transformations are the weakest type of transformation, in the sense that they preserve the least information from the original function (just the ordering)?

Also, am I right in thinking VNM utility functions represent cardinal utility?

In the linguistic sense, it strikes me that risk aversion CAUSES the diminishing marginal utility. After all, risk aversion is a property of the agent and diminishing marginal utility is a property of the thing being valued. Taking for granted that the agent's feelings are the most proximate causes of utility functions and their shapes, this must be true, but maybe I am missing the point.
 
  • #18
For risk aversion, read fear. Yes, it does lead to diminishing returns, and for the obvious reason. I have been a victim of the "Is it safe?" mentality (the 'Marathon Man' film quote), where people are unwilling to take risks that amount to no risk at all. New ideas are lost because of this, but it is a sign of awareness and aging: the old are more aware of what they have to lose, while the young, having nothing, are more aware of what they have to gain. ("Young" here means coming from nothing, not necessarily physical age, although I'm sure you can see the analogy.)
 

1. What is risk aversion and diminishing marginal utility?

Risk aversion refers to an individual's preference for avoiding risk or uncertainty when making decisions. Diminishing marginal utility is the principle that as an individual consumes more of a good or service, the additional satisfaction or utility gained from each additional unit diminishes.

2. How are risk aversion and diminishing marginal utility related?

The relationship between risk aversion and diminishing marginal utility is often debated. Some argue that risk aversion causes diminishing marginal utility, as individuals are willing to give up potential gains in order to avoid potential losses. Others argue that diminishing marginal utility leads to risk aversion, as individuals may be more willing to take risks in order to increase their overall utility.

3. Are there any studies that have explored the relationship between risk aversion and diminishing marginal utility?

Yes, there have been several studies that have examined the relationship between risk aversion and diminishing marginal utility. These studies have produced mixed results, with some supporting the idea that risk aversion causes diminishing marginal utility and others supporting the opposite view.

4. Can risk aversion and diminishing marginal utility be influenced by external factors?

Yes, both risk aversion and diminishing marginal utility can be influenced by external factors such as past experiences, cultural norms, and personal beliefs. These factors can shape an individual's perception of risk and their utility preferences.

5. How does understanding the relationship between risk aversion and diminishing marginal utility impact decision making?

Understanding the relationship between risk aversion and diminishing marginal utility can help individuals make more informed and rational decisions. It can also help businesses and policymakers design more effective strategies for managing risk and maximizing utility for individuals and society as a whole.
