Helena Matute, a psychologist at Deusto University in Bilbao, Spain, and her colleagues enlisted 147 college students to take part in a computer-based task in which they each played a doctor who specializes in a fictitious rare disease and assessed whether new medications could cure it.
In phase one of the study, the student volunteers were divided into two groups — a “high illusion” one that was presented with mostly patients who had taken Drug A and a “low illusion” one that saw mostly patients who hadn’t taken the drug. Each student volunteer saw 100 patients, and in each instance, the students were told whether the patient had recovered. The student volunteers weren’t told that the drug didn’t work — the recovery rate was 70 percent whether or not patients took the drug. Yet, as expected, people in the high illusion group were more susceptible to erroneously concluding that it had an effect.
Presumably, because the student volunteers in the low illusion group had more opportunities to see the syndrome resolve without the drug, they were less prone to assuming that recovery was linked to it. Previous studies have shown that simply seeing a high volume of people achieve the desired outcome after doing something ineffective primes the observer to correlate the two.
Phase two of the study is when things got interesting. The experiment was repeated, except this time some patients simultaneously received two drugs — the ineffective one from phase one and a second drug that actually worked. This time, volunteers from both the high and low illusion groups were presented with 50 patients who’d received the two drugs and 50 who received no drugs. Patients in the drug group recovered 90 percent of the time, while the group that didn’t get meds continued to have a 70 percent recovery rate. Volunteers in the “high illusion” group were less likely than participants in the “low illusion” group to recognize the new drug’s effectiveness and instead attributed the benefits to the drug they’d already deemed effective. The prior belief in the first drug’s potency essentially blocked acquisition of the new information.
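To make the phase-one setup concrete, here is a minimal simulation sketch of the zero-contingency design described above. The 100 patients per volunteer and the 70 percent recovery rate come from the study description; the 80/20 split between treated and untreated patients is an assumed figure (the article says only "mostly"), and the delta-P contingency measure is a standard way of quantifying the (non-)effect rather than anything reported in the excerpt. The point it illustrates: both groups see a delta-P of roughly zero, but the high-illusion group sees far more drug-plus-recovery coincidences, which is what the illusion feeds on.

    import random

    random.seed(42)  # reproducible illustration only

    def run_phase_one(n_drug, n_no_drug, recovery_rate=0.70):
        """Simulate one volunteer's patients: recovery happens 70% of the time
        whether or not the (ineffective) drug was taken."""
        cells = {"drug_recovered": 0, "drug_not": 0,
                 "no_drug_recovered": 0, "no_drug_not": 0}
        for _ in range(n_drug):
            cells["drug_recovered" if random.random() < recovery_rate else "drug_not"] += 1
        for _ in range(n_no_drug):
            cells["no_drug_recovered" if random.random() < recovery_rate else "no_drug_not"] += 1
        return cells

    def contingency(cells):
        """delta-P = P(recover | drug) - P(recover | no drug); zero means no effect."""
        p_drug = cells["drug_recovered"] / max(1, cells["drug_recovered"] + cells["drug_not"])
        p_none = cells["no_drug_recovered"] / max(1, cells["no_drug_recovered"] + cells["no_drug_not"])
        return p_drug, p_none, p_drug - p_none

    # "High illusion": mostly treated patients; "low illusion": mostly untreated.
    # The 80/20 split is an assumption -- the excerpt says only "mostly".
    for label, n_drug, n_no_drug in [("high illusion", 80, 20), ("low illusion", 20, 80)]:
        cells = run_phase_one(n_drug, n_no_drug)
        p_drug, p_none, dp = contingency(cells)
        print(f"{label}: P(recover|drug)={p_drug:.2f}  P(recover|no drug)={p_none:.2f}  "
              f"delta-P={dp:+.2f}  drug+recovery coincidences={cells['drug_recovered']}")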
“You have to be sure before you’ll destroy what you already know and substitute it with something new,” Matute told me.
More here: FiveThirtyEight Science
Belief is extraordinarily powerful. Understanding this has been a long process in the clinical sciences. The above study displays several of the recognised errors in human thinking. Hence the evolution in medicine of the gold-standard double-blind clinical trial (i.e. neither the operator nor the subject knows whether the real treatment or a placebo is being given/received), published in a well-recognised peer-reviewed journal, to minimise some of the errors revealed in the above example.
I am only a novice in this area, but it is absolutely fascinating and impacts directly on one's ability to trade successfully. My favourite observation is from a famous late 19th-century British obstetrician reflecting on his life: "All my mistakes have the same thing in common." Here are some of the common clinical errors (substitute trading for clinical in just about every case):
• Disregard of uncertainty is a poor coping strategy.
• Recognise that we all (including specialists) have biases that profoundly influence thinking and actions.
• Cast the net wide. What else could it be?
• Is there anything that does not fit? Avoid confirmation bias – think broadly.
• Is there more than one problem?
• Be quiet and listen to the client carefully.
• Be interested.
• Your emotions greatly influence your thinking:
o Negative feelings about the client often result in non-compliance and misjudgement.
o Positive feelings may result in shortcuts or downplaying abnormalities leading to misdiagnosis.
• Competency requires good communication skills
• Many cognitive biases affect us all
o “Availability” heuristic: misattributing a general symptom to a certain disease based on how frequently one encounters that disease in one's own practice.
o Anchoring bias: relying too heavily (anchoring) on one piece of information.
o Affective error: valuing too highly information that fulfils our desires.
o Confirmation bias: selectively accepting information that supports an existing belief and ignoring information that contradicts it.
• For every diagnosis missed through lack of knowledge, one hundred are missed through lack of looking
• If in doubt, examine the patient (again).
• The “information frame” or diagnosis may be wrong.
• Thinking and action are highly linked in clinicians
• If hesitant to take clinical action based on incomplete data, then “Don’t just do something, stand there.” Think!
• Talk to colleagues (other peers, specialists, friends) about your cases.
There is an excellent article in Wikipedia about “Cognitive Biases”, and another on the Yerkes–Dodson law (first framed in 1908), a claimed empirical relationship between arousal and performance.
Perhaps a parallel exists between trading and Evidence Based Medicine. When my wife, Janena, first heard me use the term, she immediately quipped, “Isn’t that what you do anyway?” Evidence-based medicine evolved in the 1980s because of the not infrequent observation that patients treated on the basis of pathophysiological science alone (i.e. the training doctors receive) often fared poorly compared with those whose treatment was chosen on the basis of measured clinical outcomes.
There is a really interesting book, “The Signal and the Noise” by Nate Silver, that is a great read; it looks at the ability to predict in many arenas of human endeavour and ends up, roughly, at striking a balance between healthy scepticism and curiosity.
Peter
Hi Chris
Another good book is Influence: The Psychology of Persuasion by Robert Cialdini.
One chapter especially highlights confirmation bias: people who invest emotional energy in a cause or belief will rust themselves onto that belief no matter what evidence becomes available contrary to the rationale or structure of that belief.
Rex