When I first started at UMass, I had effectively no background in statistics or probability. So, when I was taking the first course in the graduate stats sequence, I tried to frame what I was learning in terms of things I already understood. When I saw the conditional probability $P(A \mid B)$, I couldn't help but think:
Assumption seems to be a close analogue of observation, and if we analyze each construct operationally, they both have a strict order (i.e., observe/assume $B$, then derive/calculate the probability of $A$). Both hold $B$ fixed in some way for part of the calculation. Suppose we then say that $B$ implies $A$ with some probability $p$. If we denote this as $P(B \to A) = p$, then we have some equivalence where $P(A \mid B) = P(B \to A)$.
Since $\to$ is just normal logical implication, with a probability attached, we should be able to use the usual rewrite rules and identities (after all, what's the point of modeling this as a logic if we don't get our usual identities, axioms, and theorems for free?). In classical logic, implication is short for a particular instance of disjunction: $B \to A \equiv \lnot B \lor A$. We can then rewrite our probabilistic implication as $P(\lnot B \lor A)$ and say $P(B \to A) = P(\lnot B \lor A)$.
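Since everything downstream leans on this rewrite, it's worth a quick mechanical check. A minimal sketch (the `IMPLIES` lookup is just the textbook truth table for material implication):

```python
from itertools import product

# Truth-table check that material implication (B -> A) is the same
# connective as the disjunction (not B) or A.
IMPLIES = {  # standard truth table for ->
    (False, False): True,
    (False, True):  True,
    (True,  False): False,
    (True,  True):  True,
}

for b, a in product([False, True], repeat=2):
    assert IMPLIES[(b, a)] == ((not b) or a)
print("B -> A matches (not B) or A on all four assignments")
```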
Similarly, we want to have the usual rules of probability at our disposal, so by the definition of conditional probability, $P(A \mid B) = \frac{P(A \cap B)}{P(B)}$. We can apply the above rewrite rule for implication to say $\frac{P(A \cap B)}{P(B)} = P(\lnot B \cup A)$. This statement must be true for all events/propositions $A$ and $B$.
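To keep the definition of conditional probability concrete, here's a small sketch on a fair six-sided die (the events $A$ and $B$ are hypothetical choices of mine, not from the derivation): conditioning on $B$ is the same as restricting the sample space to $B$ and counting.

```python
from fractions import Fraction

# A fair die; hypothetical events: A = "roll is even", B = "roll is at least 4".
omega = set(range(1, 7))
A = {2, 4, 6}
B = {4, 5, 6}

def p(event):
    """Probability of an event under the uniform measure on omega."""
    return Fraction(len(event), len(omega))

by_definition = p(A & B) / p(B)             # P(A ∩ B) / P(B)
by_counting = Fraction(len(A & B), len(B))  # restrict the space to B, then count
print(by_definition, by_counting)           # 2/3 2/3
```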
Let's take a closer look at a subset of events: those where $A$ is independent of $B$, denoted $A \perp B$. Independence is defined by the property $P(A \cap B) = P(A)P(B)$. From this definition, we can also derive the identities $P(A \mid B) = P(A)$ and $P(B \mid A) = P(B)$. Now we can rewrite $P(A \mid B) = P(\lnot B \cup A)$ as $P(A) = P(\lnot B \cup A)$. Since the relations on either side are equivalent, we can then substitute the right into the left and obtain $P(A) = P(\lnot B) + P(A) - P(\lnot B)P(A)$ (using the fact that $A \perp B$ implies $A \perp \lnot B$), which simplifies to $P(\lnot B)(1 - P(A)) = 0$. Although this looks a little weird, it's still consistent with our rules: we're just saying that when the events are independent (a notion that has no correspondence in our logical framework), the probability of the implication (i.e., the conditional probability) is wholly determined by the marginals -- if $B$ happens (which it will, almost surely, since the equation forces $P(B) = 1$ whenever $P(A) < 1$) then all that remains is $A$'s marginal, $P(A)$. If $\lnot B$ never happens (which it won't), then $P(\lnot B)$ is 0, and its contribution to the probability of the whole implication is 0.
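The degenerate constraint can be seen numerically. A small sketch, assuming independence and expanding $P(\lnot B \cup A)$ by inclusion-exclusion: the gap between the two sides factors as $P(\lnot B)(1 - P(A))$, so it vanishes only when $P(B) = 1$ or $P(A) = 1$.

```python
def implication_prob(p_a, p_b):
    """P(¬B ∪ A) when A and B are independent, by inclusion-exclusion."""
    p_not_b = 1 - p_b
    return p_not_b + p_a - p_not_b * p_a

def gap(p_a, p_b):
    """Difference between P(¬B ∪ A) and P(A|B) = P(A) under independence."""
    return implication_prob(p_a, p_b) - p_a

# The gap equals P(¬B)(1 - P(A)): zero only at the degenerate boundary.
for p_a, p_b in [(0.3, 0.6), (0.5, 1.0), (1.0, 0.2)]:
    print(p_a, p_b, round(gap(p_a, p_b), 6))
```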
Now let's consider how this works over events that are not independent. For this example, let's gin up some numbers:

$$P(A) = 0.5, \quad P(B) = 0.5, \quad P(A \cap B) = 0.1$$

Note that $A \not\perp B$ because $P(A \cap B) = 0.1 \neq 0.25 = P(A)P(B)$. Recall that because $A$ and $B$ are each supersets of $A \cap B$, their marginals cannot have a lower probability than their intersection.

Now let's compute values for either side of the equivalence $P(\lnot B \cup A) = P(A \mid B)$. First, the conditional probability:

$$P(A \mid B) = \frac{P(A \cap B)}{P(B)} = \frac{0.1}{0.5} = 0.2$$

Now for the left side of the equivalence, recall the definition of union:

$$P(\lnot B \cup A) = P(\lnot B) + P(A) - P(\lnot B \cap A)$$

Since we don't have $P(\lnot B \cap A)$ on hand, we will need to invoke the law of total probability to compute it: $P(\lnot B \cap A) = P(A) - P(A \cap B) = 0.5 - 0.1 = 0.4$.

We can now substitute values in:

$$P(\lnot B \cup A) = 0.5 + 0.5 - 0.4 = 0.6$$

Now our equivalence looks like this:

$$0.6 = P(\lnot B \cup A) = P(A \mid B) = 0.2,$$
which isn't really much of an equivalence at all.
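The arithmetic can be double-checked in a few lines, using the numbers assumed in the example ($P(A) = P(B) = 1/2$ and $P(A \cap B) = 1/10$, which are one choice of non-independent events among many):

```python
from fractions import Fraction

# Both sides of the claimed equivalence, with the example's numbers.
p_a = Fraction(1, 2)
p_b = Fraction(1, 2)
p_a_and_b = Fraction(1, 10)  # != p_a * p_b, so A and B are not independent

conditional = p_a_and_b / p_b                # P(A|B) = (1/10)/(1/2) = 1/5

p_not_b = 1 - p_b                            # P(¬B) = 1/2
p_a_and_not_b = p_a - p_a_and_b              # law of total probability: 2/5
implication = p_not_b + p_a - p_a_and_not_b  # P(¬B ∪ A) = 1/2 + 1/2 - 2/5 = 3/5

print(conditional, implication)              # 1/5 3/5, i.e. 0.2 vs 0.6
```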
So what went wrong? Clearly things are different when our random variables are not independent. Throughout the above reasoning, we assumed there was a correspondence between propositions and sets. This correspondence is flawed: logical propositions are atomic, but sets are not. The intersection of non-independent sets illustrates this. We could have identified the source of this problem earlier had we properly defined the support of the random variables. Instead, we proceeded with an ill-defined notion that propositions and sets are equivalent in some way.