7 mind slips that cause catastrophe – and how we can avoid them


https://www.newscientist.com/article/mg22730340-300-7-mind-slips-that-cause-catastrophe-and-how-we-can-avoid-them/
12 August 2015

“We should accept that accidents will happen”

Everyday cognitive and psychological traps catch us all out, and
occasionally end in disaster. So what are we doing to protect
ourselves?

The human brain is capable of great creative feats – and the odd
catastrophic piece of decision-making. We lose focus or focus too
much, we get scared or overconfident – we succumb to bias: minor
human errors that in our complex world can lead to major disasters.

Fortunately, our growing understanding of what makes us tick is
giving us new ways to avoid these glitches and more – and so harness
our minds to avoid damage to life and limb.

Confirmation bias: We only believe what we already think

WHEN BP’s Deepwater Horizon oil drilling rig exploded in 2010, the
flames were visible 50 kilometres away. Before the blowout, rig
staff had tested the concrete seal on a freshly drilled well
before removing the 1.5-kilometre drilling column. The results
indicated that the seal was not secure and that removing the column might
result in a catastrophic blowout. So why were the signs ignored?

Disaster analyst Andrew Hopkins of the Australian National
University in Canberra says the workers viewed the test as a means
of confirming that the well was sealed, not finding out whether it
was or not. When the test failed, workers explained it away using
the “bladder effect”, which attributes elevated pressure in a
drilling pipe to a flexing rubber seal rather than rising oil and
gas. An independent inquiry into the spill, commissioned by
President Barack Obama, later rejected the bladder effect as a
plausible explanation.

The rig workers’ reluctance to take their test result at face value
is nothing unusual. Most of us have trouble believing evidence that
contradicts our preconceptions. Psychologists call this confirmation
bias.

Where does it come from? Michael Frank, a neuroscientist at Brown
University in Providence, Rhode Island, says the bias may have a
physical basis in the neurotransmitter dopamine, which acts as a
reward signal in the brain. Acting on the prefrontal cortex, it
inclines us to ignore evidence that challenges long-held views,
keeping us from having to constantly revise the mental shorthand we
use to understand the world. In another part of the brain, the
striatum, dopamine has the opposite effect: its level spikes in
response to novel information, and that makes us more likely to be
open to these details.

“Dopamine’s action in the brain inclines us to ignore contrary
evidence”

In most of us, the net result of the two effects is to favour
long-held beliefs. But Frank has carried out experiments showing
that some people have a gene that causes dopamine to be broken down
more quickly in the striatum. That means they get a bigger dopamine
hit from new facts, rendering them less susceptible to confirmation
bias.

Would it be worth genetically screening employees to find those
individuals best at making decisions in high-risk situations? It
probably isn’t a good idea – at least not yet, says Frank. A single
gene can’t predict the range of behaviours people might exhibit,
whether under pressure or not.

Yet there are things we can do to help cut out confirmation bias in
critical situations, says Hopkins. For example, oil firms could
employ a “devil’s advocate” tasked with putting across a counter-argument,
forcing the decision-maker to consider alternative points of view.
Joshua Howgego

Fixation error: We miss the wood for the trees

Safety checklists reduce complications and deaths in the operating
theatre (Image: Chris Ryan/plainpicture)

IN 2005, 37-year-old Elaine Bromiley went to hospital for a minor
sinus operation. When her airway became blocked, three doctors tried
to insert a tube down her throat. When that failed, they should have
performed a tracheotomy, cutting open her windpipe so that she could
breathe. Instead, the doctors kept trying to get the tube in, not
noticing that their patient was being starved of oxygen. She never
woke up.

This type of mistake – fixation error – was famously highlighted in
a 1999 experiment by psychologists Daniel Simons and Christopher
Chabris. They asked volunteers to count how many times a group of
people in a video passed a basketball between them. During the clip,
a gorilla-suited woman appeared in the frame and thumped her chest.
So absorbed were the volunteers that half didn’t even notice. “We
have a remarkably good ability to focus attention on the things we
care about or that are relevant to our task,” says Simons.

But sometimes it means we miss things. The aviation industry has
dealt with this in part by encouraging crew to communicate. If one
person misses something, the logic goes, others can point it out.

This isn’t as simple as it might seem. Before airlines introduced
this culture in the 1980s, the cockpit tended to be hierarchical,
and crew sometimes felt unable to challenge the captain when
something went wrong. Those dynamics can occur in the operating
theatre too. During Elaine Bromiley’s surgery, several nurses
noticed that she was turning blue but felt they couldn’t tell the
doctors what to do.

As a pilot himself, Bromiley’s husband Martin saw how aviation
safety practices might be useful in healthcare. He is campaigning
for the introduction of safety protocols, including checklists.

Checklists require medical teams to introduce themselves and
verbally confirm key details of the surgery they are about to
perform. One study by researchers at the University of Toronto,
Canada, of 80 surgical staff over 170 procedures showed that
checklists reduced miscommunication, the top cause of healthcare
mistakes, by two-thirds. And the move to adopt them is picking up
speed. The World Health Organization’s surgical safety checklist,
launched in 2008, is now required in UK public hospitals.

“The evidence is unequivocal that the use of safe surgery checklists
reduced complications and the potential for death by a significant
amount,” says Bromiley, who set up the non-profit Clinical Human
Factors Group to push for change. They are not a panacea, however.
“Unless people are trained to use checklists properly, the potential
for big gains is going to be much harder to achieve.” Penny Sarchet

Primal freeze: Our survival instinct is out of date

Underwater evacuation drills accustom crews to the shock of an
emergency (Image: North Sea Oil / Alamy)

FEAR evolved as a survival mechanism. When we encounter danger, our
hearts race and the stress hormone cortisol floods our system,
giving muscles access to extra energy in the form of glucose.

The trouble is that cortisol also knocks out cognitive functions
such as working memory, which allows us to process information and
make decisions, and declarative memory – our ability to recall facts
and events. In evolutionary terms, this makes sense. “When you’re
running away from a tiger, it’s not really that important to
remember how you did it,” says Sarita Robinson, a neuropsychologist
at the University of Central Lancashire in Preston, UK. But in our
complex modern world, where cognitive dexterity can be more
important to survival than physical feats, our fear response can
leave us compromised.

That doesn’t mean that we can’t perform complicated tasks under
stress. Cortisol doesn’t disable procedural memory, which allows us
to do things like walk or open a door. That’s why most of us can
automatically execute ingrained behaviours such as unbuckling a seat
belt, even when we’re afraid. Procedural memory is also what allows
highly trained pilots and firefighters to perform under difficult
conditions. “They’re not having to generate everything from first
principles in that really high-stress environment,” says Robinson.

But without that training, our cortisol-compromised mind may cause
us to freeze or engage in automatic behaviours entirely unsuited to
the situation. In experiments involving underwater helicopter
evacuation drills, Robinson found that trapped passengers attempted
to release their harness from the side as they would do with a car
seat belt, rather than from the middle, where the clasp was located.
“They know they’re wearing a harness,” she says, “but they’re not
able to construct a new behaviour in time.”

Something similar may account for what happened one September night
in 1994, when the MS Estonia, a nine-deck cruise ferry bound for
Stockholm, sank in the stormy waters of the Baltic Sea, killing 852
of the 989 people on board. According to the official accident
report, some passengers were “petrified” as the ship listed, and
“did not react when other passengers tried to guide them, not even
when they used force or shouted at them”.

Even practising contingency routines over and over again may not
avert disaster. In a 2013 study, Steve Casner of NASA found that
Boeing 747 pilots could make critical errors when presented with
simulated emergency scenarios that differed slightly from the ones
they had encountered in training. For example, in standard simulator
training, pilots regularly practise dealing with a single engine
failure during take-off after the aircraft is already travelling at
high speed. However, trainers rarely present pilots with this problem
on the first take-off of the session, Casner says. In the study, he
and his colleagues attempted to surprise half of the pilots by doing
just that. The correct action is to continue with take-off, yet 22
per cent of the pilots who were faced with the emergency at the
beginning of their session tried to abort. In a real-world
situation, this could result in the plane careering off the end of
the runway.

The solution may be to train our brains to handle the unexpected, by
incorporating more surprises like this into practice drills. It is
difficult to do in practical terms, says Casner. Still, the US
Federal Aviation Administration has new guidelines that would
incorporate surprise events in routine pilot training by 2019. This
would make pilot preparation more realistic and effective, says
Randall Bailey of NASA Langley Research Center’s Aviation Safety
Program. “The unexpected happens every day in the real world.” Sonia
van Gilder Cooke

Outcome bias: We are seduced by success

ON 23 January 2003, a NASA flight director in Houston, Texas,
emailed astronauts on the space shuttle Columbia, notifying them
that a piece of foam insulation had ripped off the fuel tank during
take-off and struck the shuttle’s wing. “We have seen this same
phenomenon on several other flights and there is absolutely no
concern for entry,” he wrote.

Nine days later, Columbia disintegrated on re-entering the
atmosphere, destroyed by heated air that entered through the damaged
wing.

How could NASA, an organisation bristling with experts, have seen a
problem time and time again and not paid heed to it?

Our tendency to ignore warning signs is something that Robin
Dillon-Merrill at Georgetown University in Washington DC has spent
years investigating. Humans are often very bad at thinking
critically about near misses or errors, she says, so long as things
turn out well – a phenomenon known as outcome bias.

“When there are obvious things wrong, people recognise it and notice
it,” she says, “but when there are littler things wrong and people
get good outcomes anyway, over time they ignore them more and more.”
It’s only when catastrophe strikes that we suddenly wake up and
smell the coffee.

“If we get good outcomes, over time we ignore near misses, more and
more”

Why are we so easily seduced by success? In a 2012 study, Tali
Sharot of University College London and colleagues found a
correlation between our tendency towards unrealistic optimism and
dopamine levels in the brain. From an evolutionary perspective, says
Sharot, this has probably been advantageous. “It enhances
motivation. If you think you’re more likely to succeed, you’re more
likely to explore,” she says.

As for how to manage this bias, Dillon-Merrill has a suggestion to
help us take note of negative details. “One of my colleagues at NASA
holds what he calls ‘pause and learn’ workshops,” she says. The aim
is to appraise the process before the result is known. “Because you
know when you see the outcome in the future you’ll be biased about
it.” Chris Baraniuk

Groupthink: We are wired to conform

An avalanche caught 16 expert skiers at Tunnel Creek, Washington in
2012 (Image: RUTH FREMSON/eyevine)

IT HAS long been known that people tend to bend their opinions
toward those of the majority. In 2011, Jamil Zaki, a psychologist at
Stanford University in California, and colleagues discovered why. It
involves the ventromedial prefrontal cortex, a part of the brain’s
reward centre that lights up when we encounter things we want, like
a chocolate bar. Zaki’s team found that it also activates when
people are told what others think. And the more this part of the
brain responds to information about group opinion, the more someone
will adjust their opinion towards the consensus.

Conformity can be useful in our day-to-day lives, letting others
serve as a guide in unfamiliar situations, says Lisa Knoll, a
neuroscientist at University College London. But it can also lead us
into danger. Earlier this year, Knoll published a study in which she
asked people to rate the riskiness of texting while crossing the
street, driving without using a seat belt and so on. After seeing a
number that supposedly represented the evaluations of others, all
the volunteers moved their ratings in the direction of the majority,
even if that meant downgrading their initial estimate of risk.

That dynamic may have been at work in February 2012, when three
members of a skiing group made up of pros, sports reporters and
industry executives died in an avalanche on a backcountry slope in
Washington state. Keith Carlsen, a ski photographer on the trip,
told The New York Times that he’d had doubts about the outing but
dismissed them: “There’s no way this entire group can make a
decision that isn’t smart.”

How can group errors be avoided? The solution is similar to that
proposed for confirmation bias: find ways to spark debate (see
“Confirmation bias”). When Zaki meets with members of his lab, he
encourages people to voice conflicting views. He also says it can be
useful to have people vote on big decisions privately rather than
voice opposition publicly. “It’s important to encourage dissent more
than anything,” he says. Aviva Rutkin

The default mode: Our minds are built to wander

EVERY driver has been there – you hit a quiet stretch of road and
your thoughts turn to dinner or an upcoming holiday. As soon as the
environment becomes predictable, safe or boring, your mind starts to
wander. “After about 15 minutes, we find it irresistible to start
thinking about something else,” says Steve Casner at NASA.

Daydreaming has been implicated in train derailments, air accidents
and, according to a 2012 study of almost 1000 drivers by French
researchers, as many as half of all car crashes. When our thoughts
drift, a set of brain structures known as the default mode network
kicks into gear. Exactly what it does remains a mystery, but it
seems to play an important role in helping us organise our thoughts
and plan our futures, says Jonny Smallwood at the University of
York, UK.

However, that’s not necessarily useful while you are operating heavy
machinery. Thankfully, there are a few strategies you can use to
keep your mind on a task.

One way is to be aware of your body clock. Research suggests that
early risers pay attention for longer earlier in the day, whereas
night owls are better at staying focused in the evening.

Drivers may also find that taking an unfamiliar route improves
focus: a recent study found that people driving the route they
always use inched closer to cars in front of them and were less
alert to pedestrians, effects the researchers put down to daydreaming.
Chewing gum and consuming caffeine, too, have been shown to help
people stay focused on tedious tasks. Alerting people to their
waning focus is something that researchers are also exploring. Some
car companies are moving in this direction: in June, the car maker
Jaguar announced a research project to monitor drivers’ brainwaves
for signs they are losing concentration. Jessica Hamzelou

Tech clash: We don’t speak machine

ONE of the worst friendly-fire incidents involving US troops in
Afghanistan was set off by a low battery. In 2001, a member of US
Special Forces entered the coordinates of a Taliban position into a
GPS unit and was about to relay them to a B-52 bomber when the
device’s battery died. He replaced the battery and sent the
location. What he didn’t realise was that on restart, the device had
automatically reset the coordinates to its own position. A
900-kilogram bomb homed in on the American command post, killing him
and seven others.

In an increasingly automated world, misunderstandings between human
and machine are an urgent issue, says Sarah Sharples, a researcher
at the University of Nottingham, UK. Part of the challenge is making
it easy for humans to grasp what computers and devices are up to –
in other words, presenting information clearly. The GPS unit
involved was criticised for its poor user interface, with soldiers
saying its readings were easy to confuse in the fog of war.

Technological confusion has contributed to other major accidents.
When a Turkish Airlines aircraft crashed on approach to Schiphol
airport in Amsterdam, the Netherlands, in 2009, a faulty altimeter
made the flight computer slow the plane down as if it were about to
touch down. In fact it was over 1000 feet up. The first indication
of this “autothrottle” mode was a small word that appeared on the
flight display, “RETARD”, which the cockpit crew, busy with other
tasks, could not have been expected to notice.

In 2013, Asiana Airlines flight 214 crashed on approach to San
Francisco, in part because the flight crew did not know how the
plane’s complex computer would behave in certain flight modes.

Part of the difficulty, says Michael Feary, a research psychologist
at NASA, is the use of “engineer-speak” in flight computer displays.
“We need to improve the interfaces to better communicate the complex
systems on modern airplanes.”

It’s a problem that we are just beginning to understand and tackle.
Nadine Sarter at the University of Michigan in Ann Arbor and
colleagues at technology company Alion, for instance, have worked on
a NASA-funded tool that tries to find flaws in proposed cockpit
designs. The software, called ADAT, checks a number of details
including whether crucial flight information is presented clearly.
“We’re trying to use everything we’ve learned in the past,” says
Sarter, “and hopefully prevent accidents rather than explain them
after the fact.” Sonia van Gilder Cooke

By Joshua Howgego, Penny Sarchet, Sonia van Gilder Cooke, Chris Baraniuk, Aviva Rutkin and Jessica Hamzelou

Leader: We should accept that accidents will happen
https://www.newscientist.com/article/mg22730343-000-we-should-accept-that-accidents-will-happen/
12 August 2015

It’s getting ever easier to pin the blame for every accident on
somebody, somewhere. We should resist that urge.

NAPOLEON, one of the greatest forward thinkers the world has ever
known, said there was no such thing as an accident, only a failure
to recognise the hand of fate. But while he might have lived by that
maxim, society doesn’t have much time for it now.

Consider traffic accidents, the commonest of potentially serious
mishaps. Nowadays, they are often euphemistically branded as
“incidents”, and copious research is under way to identify factors
implicated in a crash, from car colour (silver is said to be safest)
to phone usage and even parasites that can alter drivers’ behaviour.

Our responses have been just as disparate, ranging from revised laws
to redesigned dashboards. (No one has proposed mandatory screening
for parasites – yet.) Minor, and deeply human, errors of judgement
are often at the root of catastrophic failures (see “7 mind slips
that cause catastrophe – and how we can avoid them”), so we are
increasingly using automation – self-driving cars, for example – to
take our error-prone selves out of the loop.

That won’t stop the blame game. Someone, somewhere, can always be
blamed: if not the users of automated systems, then their
manufacturers, programmers or those who maintain the networks they
often rely on. Increasingly omnipresent sensors allow for minutely
detailed assessments of responsibility. Left to the lawyers and
insurers, there might soon be no blame-free “accidents” at all.

This is unfamiliar territory. Existing laws cover some of the issues
that arise (14 September 2013, page 40), but we can expect some
perplexing cases to come before the courts. As they do, we should
remember that pointing the finger isn’t always productive: it can
lead to defensiveness that stymies change, and hamper attempts to
improve safety.

This has been recognised by the law for more than a century. In
1884, German Chancellor Otto von Bismarck – an improbable reformer
– introduced “no-fault” settlements, allowing workers to be
compensated for often novel industrial injuries without having to
demonstrate their employers’ negligence. No-fault is still being
built into law today: Scotland is contemplating it for medical
negligence claims.

The trouble is that no-fault goes against our social instinct to
seek out causes and allocate blame. This has generally served us
well. Without it, we would live in a much more dangerous world than
we do. But in chasing down blame, we should recall that a propensity
for error is the flipside of the capacity to take risks. And
risk-taking is a vital component of any conception of progress.

“In chasing down blame, we should recall that error is the flipside
of taking risks and thus part of progress”

Much of the time, humans are driven by goals other than safety,
which is added as an afterthought or at best a counterweight. When
the balance shifts too far, derision of intrusive “nanny states” or
overweening “health and safety” regimes is the inevitable result.

So despite what technocrats might hope, we won’t ever wipe out
accidents. “It will become next to impossible to contract disease
germs or get hurt in the city,” Nikola Tesla predicted in 1915. He
was wrong. The risks he knew were simply replaced by new ones.

To err is human, to forgive divine. Our secular society may have no
more time for divinity than for the Napoleonic hand of fate, and
recklessness should of course be penalised. But we shouldn't pursue
every trace of blame just because we can. As machines take over from
humans, we must strike a balance between learning from their errors
and prosecuting the humans who make and run them. That won’t happen
by accident.