Hi everyone, I’ve just gotten back from some unexpectedly rainy vacation time, so while my kids were playing with Legos I had time to write something I’ve been thinking about for a while. My previous posts have been about how to write for executives and other busy people. Now, I want to get into how to hone your critical thinking, without which clear writing is worthless.
So in this article, I’ll go through some of the critical thinking traps that I watched out for as an intelligence analyst and manager. In no particular order:
Assuming past is prologue
Single-cause thinking
Mirror imaging
Rational actor fallacy
Absence of evidence as evidence of absence
Confirmation bias
While any list will necessarily be incomplete, I’ve found these common enough to stick in my mind. I’ve also found these come up regularly in situations beyond intelligence analysis, particularly when you have to project what will or may happen in the future. And as the apocryphal saying goes, “predictions are hard, especially about the future,” so it’s good to really check your thinking in those situations.
Ok, let’s get into it. For brevity, I’m going to focus primarily on recognizing the traps here rather than on techniques to get to the “right” conclusion, though I will touch on some of those.
1. Assuming past is prologue
Let’s say you’re trying to figure out how customers will react to a new version of an existing product. And let’s say this is version 22, for all previous versions sales have been strong, and you’ve thoroughly evaluated what your customers need and how to position the update against the competition. Will sales stay strong for this one? Seems like a good bet, right? (Also I know nothing about sales and product positioning so forgive me…)
Most of the time, the thing that happened yesterday will also happen today. But occasionally, it doesn’t. The trick is to know when you’re in that situation.
This is hard. In fact, it may be impossible, because change often happens in unpredictable ways. (Nassim Taleb makes a good case for this in his book “The Black Swan.”) Given that, rather than trying to predict when something will change, I prefer to evaluate the variables behind why it might change and then assess whether you’re in a high-risk or low-risk scenario for change.
So back to our example: what would you tell an executive asking for your assessment of the likelihood of strong sales for version 22?
NOT GOOD: “We have done our due diligence on customer needs and competitive analysis and expect strong sales for version 22, similar to previous versions.”
GOOD: “We expect strong sales because we know that ‘fun’ is the key reason people use our product, we’ve done extensive testing that shows users will continue to perceive version 22 as fun, and our competitors have consistently been unable to match us in this category. However, we know a competitor is seeking to leap-frog us in other dimensions like usability, which, if it happens, could reduce our sales outlook.”
Again keep in mind I know nothing about sales…but you get the idea. Focusing on the dimensions of what might shift a trend provides more nuance to your assessment and can help you better see changes.
2. Single-cause thinking
(I was trying to think of a better name for this and couldn’t, though it’s similar to the concept of “linear thinking.”)
Say there’s a trend that depends on several factors. If you change one of the factors, the change in the trend should be commensurate, right?
For example, three people are pushing a broken-down car to the shop at 6 mph, with each exerting the same amount of force. (Also keep in mind I know nothing about pushing broken-down cars.) One person gets tired and stops, and now only two people are pushing. How fast is the car going? 4 mph, right?
Well, maybe. But maybe the car is heavy enough that it takes all three to keep it moving, and when one person stops, the other two can’t move the car at all. Or maybe the two remaining people are really energized by the challenge of pushing the car themselves and push harder…etc.
The point is, if you only look at one factor and assume any changes in it will map linearly to outcome changes, you may be missing the impact of complex interactions. So don’t forget to check if a change in one thing might change many others.
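To make the nonlinearity concrete, here’s a toy sketch in Python. The friction threshold, forces, and conversion factor are all invented for illustration, not real physics; the point is just that the output doesn’t scale proportionally with the input:

```python
# Toy model of the car-pushing example: speed is NOT linear in the
# number of pushers because of a friction threshold. All numbers here
# are invented for illustration.

PUSH_FORCE_PER_PERSON = 300.0  # newtons, assumed equal for everyone
FRICTION_THRESHOLD = 700.0     # force needed just to keep the car moving
SPEED_PER_EXTRA_NEWTON = 0.03  # mph per newton of force above the threshold

def car_speed(num_pushers: int) -> float:
    """Speed of the car given how many people are pushing."""
    total_force = num_pushers * PUSH_FORCE_PER_PERSON
    if total_force <= FRICTION_THRESHOLD:
        return 0.0  # not enough force to overcome friction: the car stalls
    return (total_force - FRICTION_THRESHOLD) * SPEED_PER_EXTRA_NEWTON

for n in (3, 2, 1):
    print(f"{n} pushers -> {car_speed(n):.1f} mph")
# 3 pushers -> 6.0 mph, 2 pushers -> 0.0 mph: losing a third of the
# force didn't cost a third of the speed; it stopped the car entirely.
```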
3. Mirror imaging
We all instinctively gravitate to thinking “how would I react in this situation?” But if you’re trying to figure out what one or many human beings will do, and they’re not you, this can lead you astray.
This is particularly true when you’re assessing the behavior of specific individuals, like world leaders, heads of organizations, etc. The factors that go into their decisions range from their whole life experiences to a rational analysis of their options to what friends or advisors are telling them to how they feel that day: a whole bunch of stuff that you by definition can only partially know. While you can understand the constraints on the decisions they may make, it’s very hard to predict their choices with certainty.
Therefore, I find it better to lay out a range of scenarios based on what is known about the person and about the situation they’re facing, leaving room for “high impact, low probability” scenarios—basically something that seems unlikely but would be pretty important if it happened, so worth preparing for at least somewhat.
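As a rough sketch of what laying out a scenario range might look like, here’s a toy example in Python. The scenarios, probabilities, and impact scores are all invented, and the “worth a contingency plan” cutoffs are arbitrary:

```python
# Toy scenario range for assessing a leader's next move, instead of a
# single point prediction. All names, probabilities, and impact scores
# (1-10) are invented for illustration.

scenarios = [
    ("Maintains current course",               0.60, 3),
    ("Escalates under pressure from advisors", 0.25, 6),
    ("Backs down entirely",                    0.10, 5),
    ("Starts a conflict they cannot win",      0.05, 10),  # high impact, low probability
]

for name, probability, impact in scenarios:
    # Flag unlikely-but-severe outcomes so they still get some planning.
    flag = "  <- worth a contingency plan" if probability < 0.15 and impact >= 8 else ""
    print(f"{name:40s} p={probability:.2f} impact={impact}{flag}")
```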
4. Rational actor fallacy
This one is a corollary to #3, and I particularly love it because it’s a reminder to me of how mystifying humans are. We often assume from the outside that actions are part of some grand, strategic plan. Well, not always. Sometimes there is no plan, and people are just flying by the seat of their pants.
Think it’s foolhardy that a certain world leader would start a war they can’t win, or that a company leader would make what seems to be a clear business blunder? Well, it happens all the time. Sometimes it’s a calculated risk they’ve thoroughly thought through, and they’re ready for the consequences whatever happens. Sometimes it’s thought out, but based on a rationale that misjudges critical factors while seeming sound to them. And sometimes it’s a more instinctual decision, and they do it anyway.
All of these can and do happen, and when assessing what you think someone else will do and why, it’s important to check what assumptions you’re making about their “rationality.” Otherwise we tend to see grand strategy where there’s nothing there.
5. Absence of evidence as evidence of absence
“We haven’t observed X happening” is not the same as “X didn’t happen.” You might just not have seen it, or might not have been looking for it in the right way.
This is a really hard one because there’s no way to prove a negative. By definition, if something didn’t happen, you won’t be able to find evidence of it, only of the things that did happen. So when you’re in an absence-of-evidence situation, you usually need to look for more evidence before you can be confident the thing really isn’t happening.
Fortunately, you can usually look for indicators of the impact X would have, and if all of those are absent, you can be more confident (though still not certain) that X hasn’t occurred. For example:
NOT GOOD: “We haven’t seen any indication that our competitors have created a product that would threaten our market dominance when we release version 22.”
GOOD: “We don’t expect our competitors’ products to threaten sales of version 22 because surveys show the competition continues to lag us significantly on whether the product is viewed as ‘fun,’ and while several products outdo ours slightly on perceived usability, this difference has persisted through our previous releases even as we continued to lead the market.”
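If you want to put rough numbers on this, Bayes’ rule makes the point precisely: seeing nothing only counts for as much as your ability to detect the thing in the first place. Here’s a minimal sketch; the prior and detection rates are invented, and it generously assumes no false positives:

```python
# How much should "we haven't seen X" lower our belief that X is
# happening? It depends on the detection rate: how likely we'd be to
# spot X if it were real. All numbers are invented for illustration.

def belief_after_seeing_nothing(prior: float, detection_rate: float) -> float:
    """P(X is happening | we observed no evidence), via Bayes' rule.

    prior:          P(X is happening) before we looked
    detection_rate: P(we'd find evidence | X is happening)
    Assumes we never see false evidence when X isn't happening.
    """
    p_nothing_if_x = 1.0 - detection_rate  # X is real, but we missed it
    p_nothing_if_not_x = 1.0               # X isn't real, nothing to find
    numerator = p_nothing_if_x * prior
    denominator = numerator + p_nothing_if_not_x * (1.0 - prior)
    return numerator / denominator

prior = 0.30  # say we start 30% sure a rival product is in the works
for rate in (0.1, 0.5, 0.9):
    print(f"detection rate {rate:.0%}: belief drops to "
          f"{belief_after_seeing_nothing(prior, rate):.0%}")
# 10% detection: 30% -> 28% (seeing nothing barely tells us anything)
# 90% detection: 30% -> 4%  (seeing nothing is strong evidence of absence)
```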
6. Confirmation bias
This last one may be the hardest. We all have instincts or ideas about what we think is most likely, and sometimes, about what we want a situation to be. Even if we go into a situation keeping this in mind and trying as best we can to be objective, staying objective is really hard to do.
What to do about this? Of course, being aware of your own preconceived notions is the first step. Another technique is to intentionally argue the position opposite to the one you assume or prefer, laying out the best case possible for it. There are many more ways to check your thinking, like listing out your key assumptions and testing them one by one. This is also why it is really helpful to have multiple people, ideally with different points of view, collaborating on a project: people who can argue strongly for different positions and help everyone test their views.
I’m going to stop here, realizing there are more thinking traps but I’ve laid out the ones that first came to my mind. I also know this post was sorely lacking in sci-fi examples. For those who read just for that, come back next time!