Picture this: It’s a bright summer day and you decide to pay a visit to your local botanical gardens to enjoy the sunny weather. While strolling down the path, you spot a patch of lavender flowers and lean down to smell the sweet aroma. You smile as you begin to inhale the calming scent, but then quickly jump back with a shriek, “Ahh!” You almost fall over when a fellow garden-goer catches you by the arm. “Thank you,” you begin to reply, now steady on your feet. “I was so caught up in the moment I definitely didn’t see the…” Your voice fades away as you turn your head to face what startled you. Out of the corner of your eye you could have sworn you caught a glimpse of a snake! But alas, it is just a garden hose. You feel a little silly; how could you have possibly mistaken a harmless hose for a scary snake?
Thankfully, there is a perfectly logical explanation for this (and no, it doesn’t involve getting your vision checked). The case of mistaken identity actually had nothing to do with what you saw. Rather, it came down to how much time your brain had to process the visual information in front of you. Psychologists use the terms “System 1” and “System 2” to describe the two different ways our brains process information. Essentially, System 1 helps us make quick judgments in the presence of complex data, while System 2 allows us to make deliberate, logical decisions given all that data. Having these two systems makes sense evolutionarily – the caveman who instinctively jumps away from what could be a snake is much better off than the caveman who gives what may or may not be a snake the benefit of the doubt and spends time considering whether it is truly a danger.
If you have read Thinking, Fast and Slow by Daniel Kahneman, then you are probably already familiar with this concept – and if you haven’t, I highly suggest it as a summer read! System 1 is a highly useful tool for us as humans, even now that we are far more evolved than our caveman ancestors. However, because it is a quick “shortcut” route of thinking, it is also prone to errors, and when left to its own devices, it can actually lead us astray when making decisions. In psychology terms, these shortcuts are called “heuristics,” and the errors they can cause are known as “cognitive biases.” Unfortunately, these heuristics and biases are not limited to recognizing dangers in the wild; they can impact our decisions about money as well.
One of the most prevalent cognitive biases in decisions about money is Loss Aversion. In short, losses generate a larger negative reaction than the positive reaction generated by gains of the same amount. In fact, research by Daniel Kahneman and Amos Tversky has shown that losses are felt more than twice as strongly as gains. That is, losing $100 feels more than twice as bad as gaining $100 feels good! It should come as no surprise, then, that we are rarely concerned when things go right but are quick to scrutinize when they don’t. For example, budgeting and saving for retirement are (unfortunately) not always top of mind during the “accumulation” years, when income easily allows for the lifestyle desired. However, once faced with a potential loss of that lifestyle in retirement due to a lack of financial resources, suddenly budgeting and saving become an easy priority! Loss aversion is also one of the reasons we still talk about the financial crisis of 2008 and remain so concerned with a repeat of such a shock to the market. Even in the face of the longest bull market in history, we can’t seem to shake how much that one loss hurt.
“Framing,” also studied by Kahneman and Tversky, along with its extension “Choice Architecture,” a term coined by Richard Thaler and Cass Sunstein, directly impacts the decisions we make about our money. These concepts tell us that the way information is presented to you actually impacts the choice you end up making. A classic example used to illustrate framing involves the decision to become an organ donor. In some countries that use an “opt-in” question, only about 15% of people choose to become organ donors. However, in countries that ask an “opt-out” question, over 95% of people end up as organ donors! Similarly, studies have shown that when an employer sets up its 401(k) as “opt-in,” on average only about 40% of people choose to participate. Conversely, when the 401(k) is set up as “opt-out,” participation rates rise to about 90%. Again, the question at hand is the same in both cases: to participate or not to participate. But depending on how the choice is presented, it can elicit a completely different decision.
When I was a kid, I really loved reading Aesop’s fable about the tortoise and the hare. Even though the hare is fast and can get to the end of the race quickly, he – spoiler alert – doesn’t actually win in the end. As adults, we know this didactic story has relevance in a host of different aspects of our lives, and now we’ve learned that includes our decisions about money. So, the question becomes: what can we do to counteract these shortcuts and avoid making System 1 mistakes in the first place? While System 1 processing is hard-wired in us, we can fortunately help System 2 do more of the work. How? Well, you’ve already mastered the first step, which is simply awareness. Next, learning more about these heuristics and biases, recognizing when we are succumbing to System 1, and practicing letting System 2 guide our decisions is the best way to keep these errors at bay. Thankfully, your Portfolio Counselor is here to coach you through this process when it comes to your decisions about money.