How playing poker helped me think more clearly about data

Jack Rich
7 min read · Mar 31, 2021


We’re bad at thinking statistically.

As Daniel Kahneman said, our brains are like machines for jumping to conclusions. We hate uncertainty and ignore the role of luck. We see patterns and stories where they don’t exist. And we’re often blind to data that challenges our pre-existing beliefs.

Yet, thinking statistically is a skill we need in almost everything we do. In any decision we deliberate over, we’re weighing up some sort of data. Whether that’s evaluating the claims we’re bombarded with in the media every day. Picking our careers. Or finding new strategies to address the world’s most pressing challenges.

So, it’s a bit of a problem.

But, for most, it’s a problem we find easy to ignore.

Poker — A lab for exploring how we think about data

As a professional poker player, I spent years studying ‘Poker Solvers’ — that is, computer programs that calculate Game Theory Optimal poker strategies.

Sparing you the details, with a solver you can set up any poker scenario > click go > then get back a huge amount of data on how every hand in that scenario would be played by two perfect players.

Poker’s way too complex to just copy everything the solver tells you. So the task was to understand why the solver recommended playing a certain way. With clear principles, you could then generalise strategies across thousands of scenarios and adapt to novel situations on the fly.
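If you’re wondering what ‘Game Theory Optimal’ looks like in practice, the simplest example is the textbook river-betting arithmetic that solvers generalise across entire game trees. Here’s a minimal Python sketch of that maths (the pot and bet sizes are illustrative inputs, and this is nothing like how a real solver works internally):

```python
# Toy version of the equilibrium arithmetic a solver grinds through at scale.
# Classic river spot: the bettor bets `bet` into a pot of `pot` with a
# polarised range (either the best hand or a bluff); the caller holds a
# bluff-catcher that only beats the bluffs.

def river_equilibrium(pot: float, bet: float) -> tuple[float, float]:
    """Return (bluff_fraction, call_fraction) at equilibrium.

    bluff_fraction: the share of the bettor's betting range that should be
    bluffs, making the caller indifferent between calling and folding.
    call_fraction: how often the caller must call, making the bettor
    indifferent to bluffing (the 'minimum defence frequency').
    """
    bluff_fraction = bet / (pot + 2 * bet)  # the caller's pot odds on a call
    call_fraction = pot / (pot + bet)       # leaves every bluff with zero EV
    return bluff_fraction, call_fraction

# For a pot-sized bet: bluff a third of the time, call half the time.
print(river_equilibrium(pot=1.0, bet=1.0))  # (0.333..., 0.5)
```

The formulas themselves aren’t the point. The point is that a solver spits out millions of numbers like these, and your job is to work out the principles that generate them.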

In essence, it was about spotting the signal in the noise of a whole lot of data.

This task was a great laboratory for exploring our problems with statistical thinking. Because you could gather data, develop hypotheses and get good feedback on their validity in a matter of days. And your only real incentive was to be accurate. So the learning cycles were fast and focused on just you and the data.

Unfortunately, the feedback I got rarely had good things to say about my natural instincts…

So many questions, so little time

The first challenge with solvers was choosing the scenarios to ask our new robot overlord for help in. With only so many hours in the day, we had to define a limited number of questions from a practically unlimited number of possibilities. Which created a lot of room for error.

Whenever we uncovered a new principle that explained a bunch of weird results, it felt great. Running more scenarios to see if it unravelled did not. So there was always a tendency to under-test our hypotheses. And that was easy to justify based on resources — we couldn’t test everything, could we?

When we did run a test, I’d find myself automatically trying to explain away any conflicting information it dished up. It felt so natural to craft a fancy rationalisation that allowed me to chalk it up as an exception, then get back to the fun stuff.

Essentially, whenever I was balancing the trade-off between resources and further research, there was always a gravitational pull towards whatever was least disruptive to my current understanding.

That was pretty dumb, when the whole point was to find better strategies. And the less they resembled my current understanding, the more opportunity they offered to increase my edge.

But conflicting results meant:

  • My current understanding was wrong and I was less awesome than I thought.
  • I’d have to study more and play the game I loved less.
  • I’d have to move from being clear to being uncertain.

These all felt painful. And, intuitively, that was pain I always tried to avoid.

JesusToastFace

This was made worse by the way our brains jump to conclusions. When we see noise, we naturally look for signal, even if it’s not there. So infinite data can just give us infinite ways to find a pattern that supports what we want to believe.
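As a rough illustration of how that plays out, here’s a hypothetical sketch (every ‘tweak’ and number below is invented for the example): test enough ideas against data containing no real effect, and some will still look like discoveries.

```python
# Dredging pure noise for principles: every 'tweak' has a true effect of
# exactly zero, yet a naive analyst testing enough of them will still
# find 'new principles'.
import random

random.seed(42)

N_HANDS = 2000
STANDARD_ERROR = 1.0 / N_HANDS ** 0.5  # std error of a mean of N(0, 1) results

def fake_winrate(n_hands: int = N_HANDS) -> float:
    """Average per-hand result of a strategy with zero real edge."""
    return sum(random.gauss(0.0, 1.0) for _ in range(n_hands)) / n_hands

for i in range(20):
    observed = fake_winrate()
    # Call anything beyond ~2 standard errors a 'discovery'. That's a ~5%
    # false-positive rate, so about 1 in 20 tweaks will pass on luck alone.
    if abs(observed) > 2 * STANDARD_ERROR:
        print(f"tweak_{i}: winrate {observed:+.4f} -- looks real, is noise")
```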

There were many times I’d stitch together a story that supposedly explained the exact reason why the solver was telling us to play a certain way. It’d feel so right. I’d have no shortage of great logic to back it up and I’d keep seeing other data that confirmed it. Yet, as soon as I’d run a robust test, all that clarity would come tumbling down.

This affliction had a few nasty elements:

  • It couldn’t be switched off. I could rarely just sit with uncertainty — accepting that I needed more data before any meaningful conclusion could be drawn. With whatever limited data I had, my brain was always busy spinning up a story which could send me off in the wrong direction.
  • The stories it cooked up weren’t an objective estimation of what was right, either. My brain naturally latched onto things I wanted to believe. Stories that avoided the painful task of going back to square one and often ended with ‘I was right all along!’
  • To top it all, the principles that conflicted with my prior beliefs never benefited from this auto-search function. This is where the biggest learning was, yet uncovering these principles was usually a much more painful process — if I wasn’t blind to them entirely.

We called this thinking JesusToastFace. Just as people are primed to see faces (often Jesus’) in random places (sometimes toast), whenever we thought we’d uncovered a new principle of good poker strategy, we’d ask: ‘Is this just JesusToastFace again?’ And the answer was often yes.

Assessing what worked

Once we finally reached the tables to apply our new robot-approved strategies, there was always some doubt about whether they were actually good. But, in poker, the luck factor means that a winning strategy could lose for months or even years. So assessing what worked wasn’t easy.
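To put a rough number on that luck factor, here’s a hedged Monte Carlo sketch. The winrate and volatility figures are illustrative assumptions, loosely in the ballpark for no-limit hold’em, rather than measured values:

```python
# How often is a genuinely winning player still losing after a serious
# sample of hands? Results are modelled per 100-hand block.
import random

random.seed(0)

WINRATE = 5.0   # assumed true edge: big blinds won per 100 hands
STDEV = 90.0    # assumed volatility per 100 hands, in big blinds
HANDS = 50_000
TRIALS = 10_000

blocks = HANDS // 100
losing_runs = sum(
    1
    for _ in range(TRIALS)
    if sum(random.gauss(WINRATE, STDEV) for _ in range(blocks)) < 0
)

print(f"P(a solid winner is down after {HANDS:,} hands) "
      f"~ {losing_runs / TRIALS:.0%}")
```

Under these assumptions, roughly one run in ten is still underwater after fifty thousand hands, despite a genuinely healthy edge. Short-term results tell you very little.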

The problem was, when I won, my brain would tell me it was because I was the boss. But when I lost, my brain would tell me I was just unlucky.

This was particularly problematic when my strategy was the result of hundreds of hours of solver research. I had mathematical “proof” that I was right and this idiot across from me kept getting lucky. What injustice! When it served my ego, it wasn’t difficult to forget that my ‘proof’ might be based on a narrow data set that I’d analysed badly.

To really uncover mistakes, I needed to be laser-focused on all the weak signals — how my opponents were reacting, how their approach differed from mine, or where new solver data might call old principles into question. Then I needed to restart the cycle of running tests and analysing the data. But, the whole time, there was this internal lawyer in my head, telling me not to raise the alarm.

You suck too

If you’ve taken any interest in behavioural science, I doubt any of this is new to you. Behavioural scientists have fancy names for most of what I’ve described (for example: ‘what you see is all there is’, theory-induced blindness, confirmation bias, the narrative fallacy, the representativeness heuristic).

What was unique about poker was how often I got to see myself making these errors. This meant the problem — that I was bad at thinking statistically — wasn’t just an intellectual curiosity or something I’d spot in others. It was something I absolutely had to work on.

In most other paths in life, you can get by barely noticing these errors. When we make decisions to improve our lives, our workplaces and our societies, it can be months or years before we get any feedback on whether we were right. And it’ll usually be ambiguous, in a way that makes it easy for us to rationalise away any harsh truths. So this problem can be really easy to ignore.

But poker players are good at this stuff, and I was good at poker. And by any objective standard of rationality, I suck at thinking clearly about data. So you almost certainly do too.

That means, whether you’re assessing hard numbers, case studies, stories or feelings, you’ve probably made lots of errors in reaching the conclusions you hold dear.

And if you thought more clearly about that data, you might find a lot of room for improvement.

So, what’s the solution?

The solution is not to ‘unbias’ yourself. Behavioural scientists have shown that these errors in thinking aren’t quirks we can easily overcome. They’re the product of millions of years of evolution.

Learning all the names of your favourite biases won’t help much either. Turns out, even the behavioural scientists studying this stuff are seeing a lot of Jesus in their toast — meta-studies have shown that a majority of their findings can’t be replicated.

What I’ve seen in everyone who’s good at this is a certain mindset:

  • Humble: They recognise their own fallibility and are prepared to be wrong — a lot. Even if they’re confident in their abilities relative to others, they recognise their limits vs. a complex world.
  • Curious: They reflect on their bias, play devil’s advocate with themselves, and really listen to alternative perspectives.
  • Proactive: They ask the difficult question, even when everyone else feels clear. They test their hypotheses, even if unravelling the clarity they’ve built up would be painful.

In Phil Tetlock’s research into expert judgement and forecasting, he identifies similar traits outside the bubble of poker too. As he says, these people treat their beliefs as “hypotheses to be tested, not treasures to be guarded”.

With the right mindset, these clearer thinkers then practice interventions that help them limit the impact of their bias. The biases don’t go away. But, with enough practice, the interventions can slowly but surely become automatic too.

In poker, these habits emerged for us:

  • Question your questions, before you ask them: What other questions could you ask? What will your data not tell you? Make limitations explicit or else they’ll disappear from later discussion.
  • Make proper predictions, before collecting data: When you’re wrong in a way that can’t be ignored, it’s much harder to rationalise away conflicting evidence and keep your current understanding intact. And it’s the best feedback you’ll get to help you learn.
  • Always ask: is this just JesusToastFace? Your brain is a machine for jumping to conclusions, so question its outputs. Do you have sufficient data to draw any meaningful conclusion? Does your story stand up to scrutiny? Where practical, test your hypothesis in a rigorous way, however right it feels.
  • Be social and talk about bias: It’s much easier to spot bias in others. So share stories about your own bias and try to make it a safe thing to talk about in your team.

What works for you will depend on your context. So cast your net widely for inspiration (Tetlock’s Superforecasting is a great place to start) and start experimenting.

In all, there’s no shortcut. It’s like building any other skill — practice, reflect, learn. The good news is that you don’t need any fancy technical expertise. Just a willingness to make mistakes and learn from them.

The world needs more statistical thinkers. So get started :)

Written by Jack Rich

Ex-poker pro, now trying to make work better.