On small Bayesian updates

Today I was walking home from dental x-rays (having taken a large detour to walk by the riverside) listening to the latest episode of the excellent Practical Stoicism podcast, where the topic of atheism was touched upon. It tempted me to write an entry about atheism and identity or something a bit broader, but it also got me thinking about the empirical question of the existence of God (an old favorite and a topic for yet another blog post). The crux of the matter is this: if we suppose an involved personal God, that theory makes predictions clearly different from those made by atheism/naturalism, such as the effectiveness of prayer in treating disease, and proposes experiments such as the one described in 1 Kings 18:16-40; suffice it to say most believers wouldn't dare put these to the test. If we suppose a God that no longer interacts with the world or never did, this theory makes the exact same predictions as naturalism, but loses HARD due to the immense complexity penalty of the hypothesis (this is a bit handwavey, but one reckons it is at minimum as complex as the human brain), compared to the sort of dynamics that physics suggests the world has: the kind that could easily be fitted on the back of an envelope.

But I digress. The thought I had was as follows: most observations do, in principle, require you to update on the existence of God one way or another. For example, it's easy for me to imagine evidence that would quickly convince me that theism is true, such as God appearing in front of me under some guise and effortlessly fulfilling any demonstration of power I might ask of Him, be it a formation of shooting stars appearing and spelling "God is real", or factoring all the numbers in the RSA Factoring Challenge. Every moment that I don't observe this happening, then, is evidence against the existence of God: trivial evidence, because God doesn't often make Himself known anyway, but the likelihood of me observing something like this at any given moment is higher if theism is true (low, but meaningfully different from zero) than if it is false (ε, for all intents and purposes 0). But at the same time, theism with an involved personal God predicts that, for example, with very high probability God would not let a rogue AI destroy all His creations, so any news suggesting that aligning AIs is feasible is evidence in favor of God (and against atheism). Again, trivial evidence that you could keep observing for trillions of years and not meaningfully alter your credences, but evidence nevertheless.
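To put some (entirely made-up) numbers on just how trivial this kind of evidence is, here is a minimal Python sketch; the prior and the per-second probability of a visible miracle under theism are assumptions pulled out of thin air purely for illustration:

```python
import math

# Illustrative numbers only: how much does a per-second sliver of evidence
# move the posterior, even when accumulated over a trillion years?
prior_p = 0.5        # prior P(theism), chosen arbitrarily for the example
delta = 1e-25        # assumed per-second P(visible miracle | theism); made up

# Each uneventful second is evidence against theism with log-likelihood ratio
#   log P(no miracle | theism) - log P(no miracle | naturalism)
# = log(1 - delta) - log(1)
per_second_llr = math.log1p(-delta)

seconds = 60 * 60 * 24 * 365 * 10**12   # roughly a trillion years of seconds

log_odds = math.log(prior_p / (1 - prior_p)) + seconds * per_second_llr
posterior_p = 1 / (1 + math.exp(-log_odds))
print(f"P(theism) after ~1e12 uneventful years: {posterior_p:.7f}")
# prints ~0.4999992: the credence has barely moved
```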

If we were perfectly rational agents with infinite computational capacity, we'd update like this constantly, with regard to the existence of God and every other possible hypothesis. Suffice it to say, we are not. What, then, can we as boundedly rational agents do about this? Simply practicing a healthy dose of epistemic humility would be a good start: realistically, our brains aren't precise enough to do small updates, so we should recognize that we can easily find ourselves with trapped priors, failing to make updates even when we ought to. Besides the obvious application in epistemology, depression, phobias, and other psychological issues likely have something to do with this, at least in some cases (e.g. a person with cynophobia - fear of dogs - is absolutely convinced that dogs are terrible creatures out to kill him, and any experience with dogs is analyzed in this light: the strong prior colors the perception of the experience to such a degree that it is always interpreted as further evidence that dogs are terrible - further evidence of the prior having been right all along). Recognizing that this failure mode exists in the first place may enable you to compensate.

And how would you go about compensating? One practical idea I had was to not even try to update directly, but to file the experiences in a separate folder. Once the folder starts bursting at the seams, or it becomes topical to revisit the question for other reasons, THEN take the deluge of small bits of evidence all at once and think whether, taken together, they ought to count as a trivial, modest, or even sizable chunk of evidence in favor of the hypothesis in question. Or simply be less absolute about things in general: for instance, if you are absolutely convinced dogs are frightful creatures, or that your significant other has turned into the most loathsome person on the planet, or whatever, stop and think whether all the available evidence TRULY supports this hypothesis, or whether you've in fact had a lot of good experiences that your brain simply failed to update on.
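As a toy illustration of that "separate folder" idea, here is a short Python sketch; the class, the batch size, and all the likelihood numbers are my own inventions for the example, not anything prescribed above. Observations are filed as log-likelihood ratios and only folded into the belief in one batch:

```python
import math

class EvidenceFolder:
    """Toy sketch of 'file now, update later': store per-observation
    log-likelihood ratios and fold them into the belief only in one batch."""

    def __init__(self, prior_p: float, batch_size: int = 50):
        self.log_odds = math.log(prior_p / (1 - prior_p))
        self.batch_size = batch_size
        self.folder: list[float] = []

    def file(self, log_likelihood_ratio: float) -> None:
        """Record an observation without touching the current belief."""
        self.folder.append(log_likelihood_ratio)
        if len(self.folder) >= self.batch_size:
            self.revisit()

    def revisit(self) -> None:
        """Apply everything in the folder at once, then empty it."""
        self.log_odds += sum(self.folder)
        self.folder.clear()

    @property
    def probability(self) -> float:
        return 1 / (1 + math.exp(-self.log_odds))


# Example: someone 99% convinced that dogs are terrible files 50 mildly
# pleasant encounters (log-likelihood ratio -0.1 each; numbers made up).
belief = EvidenceFolder(prior_p=0.99, batch_size=50)
for _ in range(50):
    belief.file(-0.1)   # each friendly dog is weak evidence against the hypothesis
print(f"P(dogs are terrible) after the batch update: {belief.probability:.2f}")
# prints ~0.40: taken together, the filed evidence was far from trivial
```

Working in log-odds is what makes the deferred update painless: evidence becomes additive, so revisiting the folder is just a sum rather than a chain of multiplications.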
