Discussion about this post

Sonya Lano:

Can I check that I understand? Definitely correct me or elucidate more if I'm way off. Are you saying that in some specific situations, acting on a false belief can bring successful results? Like even though the belief is false almost all the time, the actions the person takes move them forward better than making willy-nilly, uncohesive decisions based on changing beliefs? Can it be applied in a broader context - like to beliefs that aren't necessarily false, but that at the current time can't be said to be true? For example, take an author about to publish their book, since that's what I know best ;-) If they have the belief "This book is going to be a bestseller and bring in enough income to enable me to write at leisure" and then proceed to consistently base their marketing and publishing actions on that belief, presumably they will take consistent and cohesive action and will therefore almost certainly be more successful than if they'd had no consistent belief to follow at all? Sorry if this is a stupid question!

Sarah Shaw Tatoun:

Hey, Will - this reminds me very much of what Philip Tetlock and Dan Gardner talk about in Superforecasting. I'm thinking specifically of the story they tell about how Enrico Fermi approached making accurate estimates for problems with very little available data. The example they give is asking students how many piano tuners there are in Chicago. The method is to break the question down by asking 'what would have to be true for this to happen?' So you could answer the question if you knew things like the population of Chicago, the number of pianos in Chicago, and how often they're tuned. If you start with as good a guess as you can make for each sub-problem and then put the pieces back together, you get a surprisingly accurate answer, even if you're pretty far off on the individual guesses that go into it. I don't understand Bayesian logic very well, but it seems to me it's kind of similar. So quite different heuristics can work surprisingly well - just because you've broken the problem into a series of steps and carried them through consistently, the wrong probability guesses in one part of the strategy get cancelled out by the good ones. Something like that. Just spit-balling, really, but the similarity - getting unexpectedly accurate results or outcomes from a series of potentially 'bad' or 'wrong' guesses - really struck me.
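The Fermi breakdown described above can be sketched as a quick back-of-the-envelope calculation. Every number below is an assumed rough guess chosen for illustration, not data from the book:

```python
# Fermi estimate: how many piano tuners are there in Chicago?
# Break the question into sub-guesses, then multiply them back together.
# All inputs are rough assumptions, not measured figures.
population = 2_500_000              # guess: people in Chicago
people_per_household = 2            # guess: average household size
piano_fraction = 1 / 20             # guess: households owning a piano
tunings_per_piano_per_year = 1      # guess: tuning frequency
tunings_per_tuner_per_day = 2       # guess: a tuner's daily workload
working_days_per_year = 250         # guess: working days in a year

households = population / people_per_household
pianos = households * piano_fraction
tunings_needed = pianos * tunings_per_piano_per_year
tuner_capacity = tunings_per_tuner_per_day * working_days_per_year
tuners = tunings_needed / tuner_capacity

print(round(tuners))  # prints 125
```

The point of the method is that errors in the individual guesses tend to partially cancel when multiplied together, so the final figure usually lands within the right order of magnitude even when several inputs are off.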

