Can I check that I understand? Definitely correct me or elucidate more if I'm way off. Are you saying that in some specific situations, acting on a false belief can bring successful results? Like, even though the belief is false almost all the time, the actions the person takes move them forward better than willy-nilly making incohesive decisions based on changing beliefs? And can it be applied to a broader context, like beliefs that aren't necessarily false but at the current time can't be said to be true? For example, take an author about to publish their book, since that's what I know best ;-) If they have the belief "This book is going to be a bestseller and bring in enough income to enable me to write at leisure" and then proceed to consistently base their marketing and publishing actions on this belief, presumably that means they take consistent and cohesive action and will therefore almost certainly be more successful than if they'd had no consistent belief to follow at all? Sorry if this is a stupid question!
Thanks, Sonya! I appreciate the clarification question, and it's definitely not at all a stupid question. I realized after your and Sarah's questions, and after re-reading the post a couple of days after writing it, that it doesn't give enough detail to really make the points I wanted to make clear and compelling, and that it would benefit from enough added context to do that. They're fun examples, and the Substack is a great place for unpacking them a little bit. But I'll wait till the papers are published before doing an updated version here.
Anyway, the short answer is, yes, that's *one of three or so main messages*. And, yes, I think it can (and should) be applied to a much broader context, not just about false beliefs, but as you suggest about rational coherence more generally (having justified true beliefs, but also adhering to the accepted rules of math or logic or models of rational choice). And, yes, definitely, to the extent it applies to transparently false beliefs like the blackjack example or the "islands move" example, it also applies to beliefs we might at one point have believed to be true but later realized were naive, like your publishing example.
I love that example, by the way! It's one I've heard sometimes from moral philosophers (especially of the Christian sort; I think it's sort of captured in St. Augustine's Confessions): we set the virtue bar high despite the fact that humans are inherently fallible / sinning creatures, because by setting that bar high, one gets *closer* to being perfectly virtuous than they otherwise might. Mostly I hear it in my field (psychology, and especially cultural psychology), justifying the (some argue mostly *American*) idea that it's good to raise our children to believe in themselves, even to believe they can do anything they put their minds to, because if you raise them to shoot only for what they can most realistically achieve (average), they'll tend to under-achieve their potential. It's a great contrast to where we both live and to what (maybe especially under communism) has sometimes been the other extreme: to assume we can identify potential at a young age and to strive to make sure children are raised to know their realistic limits.
In my narrower field (decision science), the idea of *coherence* has often been treated as a necessary condition for rational decision making (assuming the beliefs seem directly relevant to making good decisions, like being well calibrated about outcome likelihoods in blackjack, or realizing that the canoe is moving and the islands are still when using triangulation techniques to navigate to a distant island). Both of those pseudo-beliefs (pseudo, because in these cases even the people committing to them know the beliefs to be false) have been interpreted by theorists as examples of domain-specific irrationality.
That is especially true of the blackjack example, and the attribution has gone beyond domain specificity to imply species-level, inbuilt irrationality: probability calibration is central to the part of rational choice theory concerned with how people should make choices under risk and uncertainty (expected utility theory). That entire model was derived using gambling as a metaphor for all decisions under uncertainty, and it assumes that to make good decisions you should become better calibrated about outcome likelihoods with experience (or at least seek to do so). But it would appear that blackjack players (a) become *worse* calibrated with experience (over-estimating the frequency of 10s) while (b) making *better* blackjack decisions. Indeed, it is specifically because their decision process ignores domain-general models of rational choice and embraces a heuristic that works that they become worse calibrated: the adaptive heuristic promotes false likelihood beliefs.
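If it helps to see the flavor of it, here's a minimal sketch of the kind of "assume every unseen card is a 10" rule I have in mind. The exact rule players use (and the evidence for it) is in the papers, so treat the thresholds below as my illustrative assumptions rather than the real thing:

```python
# A minimal sketch of the "every unseen card is a 10" fiction, with made-up
# thresholds for illustration only; the rule documented in the papers differs
# in its details.

TEN_SHARE = 16 / 52  # true share of 10-valued cards (10, J, Q, K): about 0.31, not 1.0

def heuristic_action(player_total: int, dealer_upcard: int) -> str:
    """Hit/stand decision made as if the dealer's hole card and every
    future card were worth 10."""
    assumed_dealer_total = dealer_upcard + 10
    if assumed_dealer_total <= 16:
        # Under the fiction the dealer must hit again, draws another 10, and busts,
        # so any made hand (12+) stands.
        return "stand" if player_total >= 12 else "hit"
    if player_total + 10 <= 21:
        return "hit"  # under the fiction, hitting cannot bust you
    return "stand" if player_total >= 17 else "hit"

# e.g. heuristic_action(16, 6) -> "stand", heuristic_action(16, 10) -> "hit"
```

The contrast I'm after: committing to the fiction that 10s are everywhere inflates players' reported 10 frequencies, while the decisions the fiction generates track good blackjack play pretty well.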
Quickly, there are two other messages:
(1) Most explanations for heuristics that lead to systematic errors like these focus on inbuilt (species-general) cognitive heuristics and biases and largely overlook the role of culture and of learning that is narrowly specific to a decision domain, like whether to hit or stand in blackjack, or how to navigate long distances by canoe with no compass or other tech. Imagining that the islands move only works for this specialized method of navigation developed by a particular culture, and imagining that all cards will be 10s only works for one subset of casino blackjack decisions; both strategies are widely used because of cultural transmission rather than just individual learning or our genes, and they only work in the narrow domain to which they are applied.
(2) Decision scientists have often (mis)applied domain-general heuristics and biases to explain these examples of putative irrationality. The heuristics (and biases) described in the blackjack examples have been wrongly attributed to inbuilt cognitive biases and assumed to lead to worse rather than better decisions.
Details that unpack and make sense of all this are only in the papers and not really in the post, unfortunately.
"we set the virtue bar high despite the fact that humans are inherently fallible / sinning creatures, because by setting that bar high, one gets *closer* to being perfectly virtuous than they otherwise might" - yes, this is exactly what I mean!
"Details that unpack and make sense of all this are only in the papers and not really in the post, unfortunately." - I imagine it might be pretty complex and not easily explainable in a short-form post online. I wonder if adhering to false beliefs is always only applicable to a single tiny decision rather than across a larger set?
I hope my post didn't imply that adhering to false beliefs is only applicable to single tiny decisions. I just meant that culture-bound heuristics *often* are. IM (false?) O, adhering to false beliefs is the rule more than the exception, largely independent of political persuasion, education, worldliness, or IQ.
Hey, Will - this reminds me very much of what Philip Tetlock and Dan Gardner talk about in Superforecasting. I'm thinking specifically about the story they tell about how Enrico Fermi approached making accurate estimates of answers to problems with very little available data. The example they give is asking students how many piano tuners there are in Chicago. The method is to break down the question by asking 'what would have to be true for this to happen?' So you could answer the question if you knew things like the population of Chicago, the number of pianos in Chicago, and how often they're tuned. If you start with as good a guess as you can make for each sub-problem and then put them back together, you get a surprisingly accurate answer, even if you are pretty far off on the individual guesses that go into the final answer. I don't understand Bayesian logic very well, but it seems to me it's kind of similar. So quite different heuristics can work surprisingly well, just because you've broken the problem down into a series of steps and carried them through consistently: the wrong guesses in one part of the strategy are cancelled out by the good guesses in another. Something like that. Just spit-balling, really, but the similarity (getting unexpectedly accurate results or outcomes from a series of potentially 'bad' or 'wrong' guesses) really struck me.
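Roughly something like this back-of-the-envelope version, where every number is just my own guess rather than Tetlock's:

```python
# A back-of-the-envelope Fermi estimate of piano tuners in Chicago. Every input
# is a rough guess of mine (not Tetlock's figures); the point is that errors in
# the individual guesses tend to partially cancel when they're combined.

population            = 2_500_000   # people in Chicago (guess)
people_per_household  = 2           # guess
piano_share           = 1 / 20      # share of households with a piano (guess)
tunings_per_year      = 1           # per piano (guess)
tunings_per_tuner     = 5 * 200     # ~5 tunings a day, ~200 working days (guess)

pianos = population / people_per_household * piano_share
tuners = pianos * tunings_per_year / tunings_per_tuner

print(round(tuners))  # about 62: plausibly the right ballpark even if each guess is off
```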
Thanks for the comment, Sarah! Interesting example. Reminds me of "The Wisdom of Crowds", except that the "crowd" in your example is all the individual pieces of evidence being averaged together. In the "10-heuristic" example and the "imagine the islands move and the canoe is still" example, however, it's something pretty different, I think: the false belief is exactly what leads to the effective results. It's not other things canceling out the misdirection of the false belief. In the case of the 10-heuristic, it's a direct argument against the idea that being well calibrated about outcome likelihoods is an important predictor of how well people make decisions.
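For contrast, the "errors cancel out" mechanism in the Wisdom-of-Crowds / Fermi case looks something like this toy simulation (just an illustration of the contrast, not anything from the papers):

```python
# Toy illustration of errors canceling: many noisy, independent estimates
# average out close to the truth, even though each one misses badly.
import random

random.seed(0)
true_value = 100.0
guesses = [true_value + random.gauss(0, 30) for _ in range(1000)]

average = sum(guesses) / len(guesses)
print(round(average, 1))  # close to 100, though typical single guesses miss by ~30
```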