Bayesian inference allows you to make predictions about what a particular customer will do based on what you know about the whole population and what that customer has done up until now. Bayesian inference derives from Bayes' theorem, which states that the probability of a hypothesis *H* being true given the existence of some evidence *E* is equal to the probability that the evidence exists given that the hypothesis is true, times the probability that the hypothesis is true before the evidence is observed, divided by the probability that the evidence exists. In mathematical terms:

*P(H|E) = P(E|H) * P(H) / P(E)*

*where:*

- *P(H|E)* = the probability of H given E (posterior probability)
- *P(E|H)* = the probability of E given H (likelihood)
- *P(H)* = the probability of H before any evidence is available (prior probability)
- *P(E)* = the probability of E (marginal likelihood)

Note that when comparing the relative probabilities of two hypotheses (such as whether *H* occurs or does not occur), the marginal likelihood can be ignored, because *P(E)* is the same for every hypothesis.
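As a minimal sketch of the theorem, the snippet below works through Bayes' rule for two complementary hypotheses using made-up numbers (a hypothetical churn hypothesis and a "no purchase in 90 days" evidence flag, not real data). It also shows why the marginal likelihood can be dropped when comparing hypotheses: normalizing the unnormalized scores gives the same posterior.

```python
# Illustrative numbers only: H = "customer will churn",
# E = "no purchase in the last 90 days".
p_h = 0.2               # prior: P(H)
p_e_given_h = 0.9       # likelihood: P(E|H)
p_e_given_not_h = 0.3   # P(E|not H)

# Marginal likelihood P(E) via the law of total probability.
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

# Posterior via Bayes' theorem.
posterior = p_e_given_h * p_h / p_e

# Comparing hypotheses without P(E): compute unnormalized scores
# P(E|H) * P(H) for each hypothesis, then normalize.
score_h = p_e_given_h * p_h
score_not_h = p_e_given_not_h * (1 - p_h)
posterior_from_scores = score_h / (score_h + score_not_h)

# Both routes give the same answer, so P(E) can be ignored
# for relative comparisons.
assert abs(posterior - posterior_from_scores) < 1e-12
```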

This type of analysis, in the form of a naïve Bayesian classifier, is what email applications use to distinguish spam from legitimate messages. To apply it to lifetime value analysis, we would consider many possible CLV values, each one forming a different hypothesis. By looking at the occurrences of various pieces of evidence (such as recency and frequency of purchase), we can then determine the relative probabilities of the hypotheses and develop a more accurate customer lifetime value.

Copyright 2018 Custora, Inc.