Why hastily forwarding rumour warnings is a bad idea

In 2025 an otherwise rational person with a PhD in biology was induced to forward to loved ones this variant of the 2016 "WhatsApp Gold" hoax:
Just a quick heads up [I've been] told that there is going to be a Whatsapp Gold. There's a video coming out tomorrow on Whatsapp called Martinelli. Don't open it. It gets into your phone and nothing you can do can fix it. Spread the word if u know anyone. If u receive a message to update WhatsApp Gold * Don't open it!!!! They just announced on the radio and I've looked on Google the hack is serious please send this to everyone
Obviously please don't: if you really look on Google, the top few results (as of the day it was sent) clearly say it's a well-known hoax. There are several red flags: an explicit instruction to send to everyone (Dawkins' "mind virus" code); no names or dates (the meme's author clearly wanted the message to appear to have originated from its most recent sender on its most recent send date, no matter how many years it has actually been circulating); a vague appeal to a "radio" authority (which station? which programme?); a non-specific description of the threat; and so on. It wouldn't have taken long to call the message's bluff by actually performing that Google check before forwarding.

PhD biologists know how viruses spread, so I wondered what mental process this parasitical meme had evidently managed to hijack. It hadn't done so perfectly (the patient still didn't forward to all their contacts immediately, as the message intended), but it worked well enough for them to forward to loved ones before asking the computer scientist sitting nearby to check, not after.

The patient expressed their reasoning as follows:

If I forward it, the cost of being wrong is inconvenience. If I don't forward it, the cost of being wrong is disaster.

I'm not sure this entirely explains the observed behaviour, since this particular message said the danger would occur "tomorrow", which left at least 14 hours to check. So it's possible that some other force was also at play but was not stored clearly enough in long-term memory to be referenced when the patient later reconstructed their reasoning. Perhaps the message induced a brief panic state that caused the rational mind to forget it did in fact have the means to check the claims, and perhaps the person's cultural background also contributed a cognitive bias toward believing it acceptable to speculatively "branch-predict" rumours as true before evaluating them. We can still address the stated risk-averse motivator:

The cost of being wrong is more than inconvenience

  1. If you're wrong and the recipient finds out, they may forgive you, but they'll still (perhaps subconsciously) label you as having "alarmist" tendencies and be less likely to take you seriously when you have a real warning to pass on. Are you sure you want to sacrifice your ability to give them a real warning in future? There is a well-studied phenomenon called alert fatigue: people become desensitised to alerts when they receive too many. This is not a good outcome, and it is why civil authorities try to check their facts before issuing a warning: too many warnings result in "cry-wolf syndrome", and people can miss the more serious ones. It's as if every source (including you) has an unspoken "budget" for the number of false alarms they're allowed to give before they're no longer believed. Consuming this psychological "warning budget" is the most serious cost of issuing a warning without checking, and if you do so on multiple occasions the damage is cumulative, so it's not a good habit to get into.
  2. If the recipient doesn't find out you were wrong, there's a significant probability that they themselves will pass the message on to others, thereby undermining their own capacity to issue serious warnings in future. Medics studying infectious diseases talk about R numbers: if the R number is greater than 1, we have exponential growth, and everyone who gets infected takes a reputational hit that may impair their capacity to raise an alarm in future. If you alerted two other people, your R number is 2, and we know how bad that is; even making it 1 is chancing it (see the sketch after this list).
  3. Encouraging people to act with urgency on unconfirmed reports may make them more vulnerable to financial scams in future: we don't want to cultivate that kind of overly rapid response.
  4. For the specific scenario of warning someone not to open malware: the message we want to get across is "learn what genuine updates look like and never trust anything else". It's the same principle as with money: if you study real money, you'll be able to spot a fake without having to memorise every single counterfeiting technique that might be in use. If instead we start warning people about particular fakes, even ones that really do exist, the danger is they'll start thinking "that's all I have to worry about" and become more likely to fall for something else, which is not the outcome we want. And if we warn about fakes which don't exist, they may later tire of the alerts and relax their vigilance against real threats, which again is undesirable for their safety. Verified examples can be used to help teach principles, but the emphasis should always be on the principle, not the specific example.
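
To make item 2's arithmetic concrete, here is a minimal back-of-envelope sketch in Python. The R values and the 10 "generations" of forwarding are illustrative assumptions, not measurements of any real chain message:

    # Expected total reach of a chain message after n generations of
    # forwarding, if each recipient forwards to r further people on average.
    # Illustrative model only: real forwarding is messier than this.
    def expected_reach(r, generations):
        total = 0.0
        forwarders = 1.0  # start with the original sender
        for _ in range(generations):
            forwarders *= r   # each current forwarder alerts r more people
            total += forwarders
        return total

    for r in (0.5, 1, 2):
        print(f"R = {r}: roughly {expected_reach(r, 10):.0f} people reached "
              f"after 10 generations")
    # R = 2 reaches about 2046 people; R = 1 reaches 10; R = 0.5 fizzles out.

The point of the sketch is simply that anything above R = 1 compounds: each unchecked forward multiplies the eventual reputational damage, whereas below 1 the chain dies out on its own.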

Separately from all of this, there are real problems with WhatsApp, but the danger of "chain letters" can occur on any platform.


Copyright and Trademarks: All material © Silas S. Brown unless otherwise stated.
Google is a trademark of Google LLC.
WhatsApp is a trademark of WhatsApp Inc., registered in the U.S. and other countries.
Any other trademarks I mentioned without realising are trademarks of their respective holders.