If skeptical theism is right, then any event may have ramifications far beyond our ken, ramifications that dwarf the original event in significance. Some folks--notably Graham Oppy and our own Mike Almeida--have argued that this means that if skeptical theism is right, we ourselves do not have reason to prevent great evils, such as rapes and murders, because for aught that we know great good will come of them.
While I no longer accept skeptical theism, I think this argument is mistaken. I am going to be very rough probabilistically here. To do this precisely would require bringing in appropriate measures of correlation, and then the post would be snowed under with technicalities. The technicalities in this case are important, and I have not worked them out, so maybe what I say will ultimately fail. But let's try to be rough here.
Any action has foreseeable and unforeseeable consequences. We all agree on this, even if we are not skeptical theists. The only difference is that the skeptical theist thinks that the unforeseeable ones may be much larger than we think. Now, there are, basically, three possibilities about the space of all possible actions:
1. There is no correlation between the values of foreseeable and unforeseeable consequences.
2. There is a positive correlation between the values of foreseeable and unforeseeable consequences.
3. There is a negative correlation between the values of foreseeable and unforeseeable consequences.
The quick version of my rough argument is this. Even given skeptical theism, we have no reason to accept (3). But given either (1) or (2), we should prevent evils when the foreseeable consequences are good, without worrying unduly about unforeseeable consequences. For unless there is a negative correlation between the values of the foreseeable and unforeseeable consequences, an unforeseeable bad is no more likely to come of preventing the evil than an unforeseeable good.
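The rough argument can be illustrated with a toy simulation. This is my own sketch, not anything the argument commits us to: I assume the foreseeable value F and unforeseeable value U of an action are jointly normal with correlation rho, and that we act only when F is positive. The point is just that the unforeseeable component, conditional on acting, averages out to zero under (1), to a bonus under (2), and drags us down only under (3):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000  # number of simulated actions

def mean_unforeseen_given_good_foreseen(rho):
    """Average unforeseeable value U among actions whose foreseeable
    value F is good (F > 0), when corr(F, U) = rho.

    Toy model: (F, U) jointly standard normal with correlation rho."""
    cov = [[1.0, rho], [rho, 1.0]]
    f, u = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
    # Policy: act (e.g., prevent the evil) only when F > 0.
    return u[f > 0].mean()

for rho in (0.0, 0.5, -0.5):
    print(f"rho = {rho:+.1f}: E[U | F > 0] ~ "
          f"{mean_unforeseen_given_good_foreseen(rho):+.3f}")
```

Under no correlation the conditional mean of U is about zero, under positive correlation it is positive, and only under negative correlation does choosing by foreseeable value systematically invite unforeseeable bads. This is merely a cartoon of the correlational claim in the text, not a substitute for the precise measures the post waves at.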
Skeptical theism makes (1) plausible. Naturalism makes (1) or (2) somewhat plausible: evolutionarily, we would likely develop choice-procedures whose unforeseeable consequences tend to be beneficial. In either case, we have no reason to accept (3), I think.