If you’ve ever watched the HBO show “Westworld”, the phrase “That doesn’t look like anything to me” has a special meaning. In the show, the ultra-wealthy vacation in what is ostensibly a live-action roleplay theme park filled with “hosts”: synthetic androids playing the townsfolk and bandits of an old western. The hosts do not know that they and the park aren’t real, so guests are free to gamble, rob stagecoaches, and rape and murder the hosts with impunity, experiencing no consequences for their actions (a major theme of the show). The hosts are simply rebuilt and returned to the park over time, and thus the dystopian horror show continues. The hosts’ inability to recognize that they are trapped in a sociopathic playground is a key part of their programming: a literal inability to perceive information that could make them question the nature of their reality, including photographs, objects not from the old west, and the modern attire and technology of the staff who run the park.
So why open with Westworld and its unique method of obscuring the truth? Because, much like the hosts, there is a tendency in finance to accept certain information without question while utterly ignoring information that contradicts it. To be fair, this isn’t anyone’s fault: it turns out that a number of cognitive biases can prevent even intelligent and rational people from recognizing errors, mistakes, or even the existence of information that challenges their preconceived notions about a topic. This is particularly true when they are literally invested in the information. As Upton Sinclair observed in his campaign memoir, I, Candidate for Governor: “It is difficult to get a man to understand something, when his salary depends upon his not understanding it.” So today, we’re talking about the cognitive biases that can trap us in false information and narratives and ultimately cost us, and others, greatly.
Three Recent Examples
The First Example: About two weeks ago, I published an article on my own Variable Universal Life (VUL) policy, explaining how it had performed over its first four years. A few weeks earlier, I had written an article for a media outlet on Indexed Universal Life (IUL) policies and how they were being aggressively marketed on social media with misleading and incorrect information. Both articles happened to publish in the same week, and the result was that I was buried under an army of insurance agents frothing at the mouth to argue that the VUL article misrepresented IUL policies (which it never mentioned) and that the policy was poorly structured for accumulation. On the latter point I agree, except that the VUL article never claimed the policy was designed to be a strong accumulation vehicle; in fact, a good portion of the article was devoted to comparing a “traditionally structured life policy” with a “wealth accumulation policy.” The IUL article, meanwhile, discussed only the misleading and incorrect claims made by unethical agents marketing IULs, yet the same horde rabidly decried that I was defaming the policies, once again insisting that no one would make those claims except someone trying to smear IULs, despite the fact that the entire article followed a “false claim and response” format. To the credit of a handful, some did engage in meaningful and thoughtful discourse (shoutout to Patrick), but the vast majority saw something referencing their favorite product and immediately grabbed their torches and pitchforks, regardless of the context of what was being discussed.
The Second Example: A highly intelligent non-professional investor recently got into real estate investing through private offerings, often called “syndicates.” Syndicates are lightly regulated (or unregulated) investment offerings, typically presented only to accredited investors (people with high income and/or high net worth), that often tout significant tax advantages along with low correlation to the stock market and significant returns. While any of these claims can be true or false depending on the nature of the offering, real estate syndicates are also rife with fraud and misrepresentation by those who sell them, making them a fundamentally high-risk investment, particularly given the lax regulation surrounding them. This investor liked his results from syndicate investing so much that he became a “syndicate educator”: a person who essentially acts as a fanboy of the investment and attempts to “educate” (that is, “market”) the opportunity to others. In doing so, he began sharing claims on his social media profile that misrepresented the returns of the investments and touted their tax advantages without disclosing the serious limitations and risks attached to those advantages. After a brief dialogue on both issues, the educator decided to block those pointing them out in the name of “avoiding an adversarial tone.” Funny how censorship is never thought of as just that, eh?
The Third Example: There is a major TikTok celebrity who, for years, has built his brand on trash-talking every investment vehicle other than Indexed Universal Life policies. His claims have included that 401(k) plans and Roth IRAs are scams, that the free money from employer 401(k) matching is a ripoff, and that IUL policies can generate two to four times the retirement income of traditional retirement savings tools. These claims have been the butt of many financial professionals’ jokes over the years, as such things often are. Recently, however, this celebrity decided to challenge actual financial professionals on their dismissal of his beloved IUL product. He provided an experienced Chartered Financial Analyst (CFA, a credential held by professionals who manage millions, even billions, of dollars) with the best possible illustration of his policy and strategy. The CFA validated the data from the illustration with the celebrity, confirming that the numbers from the analysis mirrored the celebrity’s expectations, and a few weeks later published an analysis finding not only that all of the marketing claims about the IUL were false, but that the strategy was demonstrably harmful, resulting in over a million dollars less in assets and available retirement income than simply saving the same amount of money in a 401(k) plan with a modest 3% match and a Roth IRA. The celebrity’s response has mirrored the grief cycle (apologies in advance for the bad writing, which is quoted verbatim):
Denial: “How do I debate fake math. [Product] has been check and validated by real actuaries that work for billion dollar companies… and you publish fake news riddled with errors… the math is so simple. Like how did you miss on it so badly.”
Anger: “you biased is so strong its impossible to take this serious.”
Bargaining: “I want to retract what I said early… this document is pretty freaking cool. Good Job. A few errors but we can fix those… I applaud you for you effort… once we fix a few things I’m ready to go LIVE any time you want. Things to address if you want apples to apples:…”
While depression hasn’t shown up yet, and I’m not sure we’ll ever see acceptance, it’s fairly clear that this celebrity is unwilling to acknowledge that he handed someone the tools to prove him wrong.
So What’s Happening Here?
As Dr. John McWhorter likes to put it: “None of these people are crazy, they believe these things for a reason.” But what are those reasons? Well, it turns out that a number of biases are at work here. Let’s explore them.
Confirmation bias is well known to many: the excitement and immediate acceptance of information that supports pre-existing beliefs, and the passive rejection of information that conflicts with them. This is why you see your very political friends and relatives accept at face value even the wildest negative claims about “the other party” on the internet while dogmatically defending “their party” to the death, even when there is no reasonable basis for doing so. Confirmation bias lets us accept what we subconsciously deem “useful and correct information” with essentially no cognitive effort. We expect a ball to fall to the ground when we drop it, and we never question that it does, because it fits our mental framework. In all three examples, the people in question were presented with information that conflicted with their worldview, and they immediately became hostile and dismissive of it rather than evaluating it on its merits. This leads to our second cognitive bias.
Cognitive dissonance is the inverse of confirmation bias: the mental “nails on a chalkboard” that occurs when we are presented with information that does not fit our worldview. Our brains are complex yet ultimately simple machines, and importantly, they like to operate in the laziest manner possible. When we see acceptable information, confirmation bias kicks in and we don’t question it; when we see unacceptable information, cognitive dissonance brings us to a screeching halt to address the “threat” to our consciousness. Presented with information that conflicts with our beliefs, our brain attempts to contort or twist that information to conform to our preconceived notions. So when someone says something we don’t agree with, we don’t rationally take in the new information and think it through carefully. Instead, we either attempt to “correct” the “misinformation,” or our brain begins to fight back against the change being forced upon it. Thus, in the first example, when insurance agents saw anything remotely uncomplimentary about life insurance, they rushed to “correct” the “incorrect” information, despite the fact that nothing in the articles was untrue and nothing was an attack on their favorite product. In the second example, the highly intelligent investor was so distressed at having it pointed out that he was not being transparent about the risks of what he was presenting that he chose to block people rather than endure further cognitive dissonance. In the third example, despite confirming the accuracy of the data beforehand, the celebrity is still, as of this writing, attempting to defend his brand and beloved IUL product, even though the math is accurate and irrefutable. All of this is aided (or abetted, depending on how you look at it) by the next cognitive bias.
Salience bias takes a number of forms, but the most relevant here is this: our brains take in only the information we think is relevant and screen out or ignore the rest. A great visual example is in this video. A key component of this bias is that it is not conscious; it is not a deliberate choice to ignore information that isn’t helpful to our beliefs, arguments, or actions, but a passive filter that discards whatever we don’t subconsciously find relevant. This is why people in an internet argument rarely seem to reach agreement, and the thing instead seems to spiral endlessly: Person A gives evidence that Person B is wrong, Person B dismisses or ignores that evidence and presents their own evidence that they’re right, Person A ignores that evidence, and so on. Thus, in the first example, when the insurance agents were presented with an article about a VUL as a traditionally structured life insurance policy that made no mention of IULs, they immediately leapt to critiquing the VUL as a badly structured accumulation IUL, despite repeated emphasis that it wasn’t for that purpose and made no claims to the contrary. In the second example, the highly intelligent investor didn’t respond to the criticisms presented but simply began touting other benefits of real estate syndicates rather than acknowledging or addressing the shortfalls, then resorted to censoring adverse information to avoid addressing it. In the third example, the celebrity not only supplied the data for the analysis but has since both denied and endorsed its accuracy, while continually questioning its validity in other spaces, in a diehard attempt to ignore the underlying reality that his brand is built on false claims.
How do you overcome these biases?
You really can’t. Biases are implicit cognitive heuristics born of your brain’s desire to “shortcut” the effort of sorting important information from unimportant in the most efficient way possible. If you’re attempting to overcome your own biases, the best thing you can do is slow down and ask: “Why am I accepting or rejecting this at face value without giving it a second thought? The person or information that disagrees with me exists for a reason; if I assume they’re not simply crazy, why are they presenting this information to me the way they are?” That’s about the best you can do without serious, sustained effort on a specific topic. When it comes to convincing others, you have almost no tools to force them past their biases. The best you can do is be as patient as possible, politely accept their position as rational (to them), and engage in good faith without falling into the temptation to dismiss them as wrong, crazy, or stupid for not agreeing with you. It’s important to recognize early, when faced with information that conflicts with our own beliefs, that engaging someone in a discussion, debate, or argument may not end in the satisfying conclusion in which you persuade them of your position (or they persuade you of theirs). Make a conscious effort to recognize what’s happening, and consider whether you intend to follow it through, and if so, whether you and the other party will be better people for it. Or will you simply engage in a frustrating exercise at the expense of your time and possibly your own sanity? After all, despite all the good evidence and arguments we might present on any given topic, nothing stops the other party from simply saying: “That doesn’t look like anything to me.”