**★ ★**

*TL;DR*: The simplest explanation of Bayesian methods and probabilistic
programming I’ve come across. Says a lot about the field that this book
was still extremely difficult to get through. Enjoyed the writing style
quite a bit, and thought the examples were fascinating.

## Notes

Simple definition of Bayesian Inference: **updating your beliefs after
considering new evidence**. We can get more and more confident, but never
absolutely sure!
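
To make that definition concrete, here is a minimal sketch of Bayesian updating using my own toy example (not one from the book): estimating a coin's bias with a Beta prior and Binomial likelihood. Because the Beta distribution is conjugate to the Binomial, "updating beliefs after new evidence" is literally just adding observed counts to the prior's parameters.

```python
def update_beta(alpha, beta, heads, tails):
    """Update a Beta(alpha, beta) belief after observing heads/tails."""
    return alpha + heads, beta + tails

# Start with a flat prior: Beta(1, 1), i.e. "no idea what the bias is".
alpha, beta = 1.0, 1.0

# Observe evidence in batches; each batch shifts the belief a bit more.
for heads, tails in [(7, 3), (6, 4), (8, 2)]:
    alpha, beta = update_beta(alpha, beta, heads, tails)

# Updated belief about the coin's bias: more confident it favors heads,
# but never absolutely certain.
posterior_mean = alpha / (alpha + beta)
print(posterior_mean)  # 0.6875
```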

My notes on this book are in a series of Colab Notebooks, following the book’s format of being written in Jupyter notebooks:

- Introduction
- Example: text message data inference (“can you detect the inflection point in a user’s behavior given their text message counts by day?”)
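
The book builds this as a PyMC model; as a self-contained stand-in (my own simplified version, with invented data), the same inflection-point idea can be sketched by putting a Gamma prior on the daily message rate before and after an unknown switch day `tau`, then computing the posterior over `tau` on a grid via the Gamma-Poisson marginal likelihood:

```python
import math
import numpy as np

def log_marginal(counts, a=1.0, b=0.1):
    # Log marginal likelihood of Poisson counts whose rate ~ Gamma(a, b),
    # dropping terms that are constant across tau (the sum of log c! terms).
    n, s = len(counts), counts.sum()
    return a * math.log(b) + math.lgamma(a + s) - math.lgamma(a) \
        - (a + s) * math.log(b + n)

rng = np.random.default_rng(0)
# Fake data: the texting rate jumps from 15 to 25 messages/day at day 40.
counts = np.concatenate([rng.poisson(15, 40), rng.poisson(25, 30)])

taus = np.arange(1, len(counts))
log_post = np.array([log_marginal(counts[:t]) + log_marginal(counts[t:])
                     for t in taus])
post = np.exp(log_post - log_post.max())
post /= post.sum()

tau_map = taus[post.argmax()]  # most probable switch day, near 40
```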

- A Little More on PyMC
- Example: A/B testing – finding a distribution for the delta of two sites, Site A and Site B, to probabilistically determine which performed better in the A/B test and by how much.
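
A hedged sketch of that delta idea with invented numbers: give each site's conversion rate a Beta posterior and sample the difference directly, instead of the book's full PyMC model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Made-up trial data: visitors and conversions for each site.
visitors_a, conversions_a = 1500, 120
visitors_b, conversions_b = 1400, 95

# With a flat Beta(1, 1) prior, the posterior over a conversion rate is
# Beta(1 + conversions, 1 + misses).
samples_a = rng.beta(1 + conversions_a, 1 + visitors_a - conversions_a, 100_000)
samples_b = rng.beta(1 + conversions_b, 1 + visitors_b - conversions_b, 100_000)

delta = samples_a - samples_b
prob_a_better = (delta > 0).mean()  # P(site A converts better than B)
expected_lift = delta.mean()        # and by how much, on average
```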

- Opening the Black Box of MCMC
- Example: unsupervised clustering of a dataset around two normal distributions using a mixture model.
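
The book fits this mixture with PyMC and MCMC; as a compact stand-in here is a tiny EM fit of a two-component Gaussian mixture (point estimates rather than a full posterior, and synthetic data), just to show the soft-clustering idea:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic data from two normals centered at 0 and 5.
data = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(5.0, 1.0, 300)])

# Initialize the two cluster means far apart; equal weights and variances.
mu = np.array([data.min(), data.max()])
sigma = np.array([1.0, 1.0])
weight = np.array([0.5, 0.5])

for _ in range(50):
    # E-step: each cluster's responsibility for each point.
    dens = weight / (sigma * np.sqrt(2 * np.pi)) \
        * np.exp(-0.5 * ((data[:, None] - mu) / sigma) ** 2)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from the soft assignments.
    nk = resp.sum(axis=0)
    weight = nk / len(data)
    mu = (resp * data[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (data[:, None] - mu) ** 2).sum(axis=0) / nk)

# mu should land near the true centers, 0 and 5.
```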

- The Greatest Theorem Never Told
- Example: how to order Reddit submissions (factors: vote count, time passed, etc.)
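
The chapter's key trick, sketched with invented vote counts: rank each submission by a conservative lower bound on its true upvote ratio, so a 5-for-5 post doesn't outrank a 950-of-1000 post just because its observed ratio is 100%.

```python
import numpy as np

rng = np.random.default_rng(7)

def lower_bound(ups, downs, n=20_000):
    """5th percentile of the Beta posterior over the true upvote ratio."""
    samples = rng.beta(1 + ups, 1 + downs, n)
    return np.percentile(samples, 5)

posts = {"tiny sample": (5, 0), "big sample": (950, 50)}
ranked = sorted(posts, key=lambda p: lower_bound(*posts[p]), reverse=True)
# "big sample" ranks first: its ratio is lower but far more certain.
```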

- Loss Functions
- Example: optimizing for The Showcase on The Price is Right
- Example: Bayesian Kaggle submission for observing dark matter
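
The core move in that chapter, sketched with simplified rules and made-up numbers: given posterior samples of the showcase's true price, score every candidate bid by its expected loss (overbidding forfeits everything, underbidding costs the shortfall) and pick the bid minimizing it, rather than just bidding the posterior mean.

```python
import numpy as np

rng = np.random.default_rng(3)
# Stand-in posterior: pretend MCMC gave us samples of the true price.
price_samples = rng.normal(20_000, 3_000, 50_000)

def expected_loss(bid, samples):
    # Overbidding loses the whole showcase; underbidding costs the shortfall.
    loss = np.where(bid > samples, samples, samples - bid)
    return loss.mean()

bids = np.arange(5_000, 30_000, 250)
losses = [expected_loss(b, price_samples) for b in bids]
best_bid = bids[int(np.argmin(losses))]
# The loss-minimizing bid comes in well under the posterior mean of ~20,000,
# because the asymmetric loss punishes overbidding so heavily.
```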

- Priors
- Example: predicting stock returns
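
A sketch of why the prior matters so much here (all numbers invented): daily returns are so noisy that a flat prior lets a lucky sample imply a wildly optimistic mean return, while a skeptical prior centered at zero shrinks the estimate toward something sane. With a known-variance Normal likelihood and a Normal prior, the update is conjugate:

```python
import numpy as np

rng = np.random.default_rng(11)
returns = rng.normal(0.001, 0.02, 60)  # ~3 months of fake daily returns

xbar, n = returns.mean(), len(returns)
obs_var = 0.02 ** 2                    # treat volatility as known

# Skeptical Normal(0, 0.001^2) prior on the mean daily return.
prior_mean, prior_var = 0.0, 0.001 ** 2

# Normal-Normal conjugate update: precision-weighted average of prior
# mean and sample mean.
post_var = 1.0 / (1.0 / prior_var + n / obs_var)
post_mean = post_var * (prior_mean / prior_var + n * xbar / obs_var)

# post_mean sits between 0 and the raw sample mean, pulled toward the prior.
```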

Overall, this is some of the most challenging material I’ve worked on in any field, not just machine learning. I’m really grateful for this book, since I was completely drowning in the material before I found it. I believe in the power of probabilistic programming and modeling, and I look forward to exploring it more with libraries like Uber’s Pyro and Google’s TensorFlow Probability.

After reading this book I don’t feel like I have a great handle on MCMC or building PyMC models, but I have at least enough of a foundation in vocabulary and concepts to begin branching out.