On June 12, 2008, 24-year-old Aditi Sharma
became the first person to be convicted of murder based on a brain scan. The
prosecution alleged that the MBA student had organised a tryst with her former
fiancé at a McDonald’s in the Indian city of Pune. There she had given him
sweets laced with arsenic.
Ms Sharma protested that she was innocent,
and the police gave her a novel chance to prove it. She agreed to have an electroencephalogram,
which would be analysed by software developed by Gujarati neuroscientist
Champadi Raman Mukundan. He calls it a Brain Electrical Oscillations Signature
test (BEOS).
Ms Sharma’s role in the procedure was entirely
passive. She was seated in a chair. Electrodes were attached to her scalp. Then
details of the crime were read out to her. These sparked electrical signals, and
to the analyst it looked as though regions of her brain had lit up like a
Christmas tree.
The prosecutor successfully argued that BEOS
analysis proved that she clearly had “experiential knowledge” of the
murder. In other words, certain sections of Ms Sharma’s brain would not have
lit up unless she had actually participated in the crime. Even if it seemed
like a leap of faith in an unfamiliar technology, Judge Shalini
Phansalkar-Joshi stated that the expertise of the BEOS operator “can in no way
be challenged”. She sentenced the young woman to life imprisonment.
The case is still not settled. In September
of the same year India’s National Institute of Mental Health and Neuro Sciences
declared that brain scans were unreliable in criminal cases. Ms Sharma
thereupon appealed to the high court, complaining that her conviction had been
based on “bad science”. She was released on bail — although it may be years
before her case is reviewed.
Neuroscientists and bioethicists elsewhere
were horrified that this new technology had been accepted as incontrovertible
evidence. “Someday we may invent a perfect lie detector,” Hank Greely, a Stanford
University expert on neurolaw, told the International Herald Tribune. “But we
need to demand the highest standards of proof before we ruin people’s lives
based on its application.”
Transforming the law
While it seems unlikely that American
judges will accept the pontifications of neuroscientists as readily as Judge Phansalkar-Joshi
did, there is no doubt that American neuroscientists think that their insights
will eventually transform the law. Michael Gazzaniga, one of the leading
American neuroscientists, has even declared that someday it will “dominate the
entire legal system”.
Many neuroscientists today are resolutely
determinist in their outlook. In their view, the mind is the brain and the
brain is the mind. All of our ideas have a physical explanation, even those
which apparently transcend matter, like fairness, altruism, love, beauty, God
and, of course, free will.
As Anthony Cashmore, a leading American biologist,
argued recently: “Progress in understanding the chemical basis of
behavior will make it increasingly untenable to retain a belief in the concept
of free will. To retain any degree of reality, the criminal justice system will
need to adjust accordingly.”
Functional magnetic resonance imaging (fMRI)
lie detectors are the leading edge of this adjustment.
Currently the criminal law distinguishes
people who have caused harm deliberately (first-degree murder) from
those who caused it inadvertently (involuntary manslaughter). But research in
cognitive neuroscience suggests that there may be no essential difference
because the sensation of volition happens a fraction of a second after our
brain has already determined the action. In other words, our brain is deciding,
not us.
According to many specialists in the
burgeoning field of neurolaw, the clash with the traditional understanding of
innocence and guilt means that criminal law must be “radically
reconceptualised”. The first step towards this transformation is far greater
confidence that brain scans reveal our innermost thoughts. Determination of guilt
and innocence is not the only issue which could be decided with a brain scan. Vast
new areas could open up within the legal system.
* Civil law suits which turn upon
mental capacity could be settled with objective criteria. A lawyer could prove
that his client had entered into a contract that was beyond his capacity to understand.
* People suing for compensation
could give verifiable proof that they are suffering from bad backs or
psychological trauma.
* Parole boards could be
reassured that prisoners have been rehabilitated. Brain scans could verify that
sex offenders no longer pose a threat to the community.
* Criminals could present scans showing brain abnormalities in order to
mitigate the severity of their sentences.
* Defence attorneys could request
that prospective jurors be scanned to detect whether they harbour prejudices
against a client, or whether they are more likely to insist upon a harsh sentence.
At the moment, the main gateway for
neurolawyers to peer into a client’s mind is an fMRI machine. This is a
gigantic (and expensive) metal doughnut which yields images of the brain
quickly and with a high degree of spatial accuracy.
Judges are sceptical
Two companies are currently marketing fMRI scans
for lie detection in the US, No Lie MRI, in California, and Cephos Corporation,
in Massachusetts. On its website, No Lie MRI estimates that the market for
accurate truth verification is at least US$3.6 billion. But because fMRI scans
are said to be far more accurate than old-fashioned polygraphs, with their quivering
needles tracing a graph of a subject’s perspiration and pulse, their financial
potential is far greater.
The only hitch is that US courts have so
far declined to admit fMRI scans as evidence in trials. However, defence
lawyers are constantly probing the system. As the technology improves, sooner
or later a judge will accept them.
The latest high-profile American attempt
concluded in June in Tennessee, where Lorne Semrau, the CEO of two nursing homes
accused of rorting Medicare, pleaded that he had acted in good faith. He
submitted brain scans taken by Cephos to demonstrate his sincerity. The judge dismissed
them as evidence but, significantly, he added that “in the future, should
fMRI-based lie detection undergo further testing, development, and peer review,
improve upon standards controlling the technique’s operation, and gain
acceptance by the scientific community for use in the real world, this
methodology may be found to be admissible”.
Broadly speaking, there are two classes of
critics of neurolaw. The first accepts that fMRI technology will be extremely
useful in assessing guilt or innocence – but not yet.
Theoretically, fMRI scans are far more
accurate than old-fashioned and discredited polygraphs because they are measuring
truthfulness itself, not anxiety about being accused of deception. Telling the
truth comes automatically, but lying requires an executive decision to withhold
a truthful response. What the fMRI does is measure changes in the magnetic
properties of oxygen-grabbing haemoglobin molecules in red blood cells. When a
subject tells a lie, the scan captures the presence of greater activity in
regions of the brain which have been linked to deception.
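In principle, the underlying inference is just a statistical comparison. The sketch below is a deliberately simplified illustration, with invented numbers and none of the preprocessing, haemodynamic modelling or multiple-comparison correction that a real analysis requires, but it shows the kind of reasoning involved: testing whether activity in a deception-linked region is reliably higher when a subject lies.

```python
# Toy illustration of the inference behind fMRI lie detection.
# All values are invented; this is not a real analysis pipeline.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical average BOLD responses (arbitrary units) from a
# deception-linked region, one value per question answered.
truth_trials = rng.normal(loc=1.0, scale=0.3, size=40)  # truthful answers
lie_trials = rng.normal(loc=1.3, scale=0.3, size=40)    # deceptive answers

# Is activity reliably higher on deceptive answers?
t, p = stats.ttest_ind(lie_trials, truth_trials)
print(f"t = {t:.2f}, p = {p:.4f}")
# A small p-value is read as "greater activity while lying": a
# statistical inference about groups of trials, not a photograph of a lie.
```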
However, there is more to thinking than
haemoglobin. Those brightly coloured images in fMRI scans are not photographs
of the state of the brain but composite statistical representations distilled
from recordings taken seconds apart. Their accuracy depends completely upon how
well the experiment was planned and executed.
A brain telling a lie can only be detected
if it is compared with how an average brain tells the truth. Even if we know
what an average brain looks like, it is hard to know whether or not this particular
brain is an outlier – a truth-telling brain affected by prescription drugs or a
genetic anomaly, for instance. The scans are so complex that they require
experienced judgement to assess them properly. And judgement can involve bias.
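A minimal sketch, again with invented numbers, shows how the outlier problem plays out: a subject whose honest baseline happens to sit far from the normative average will land in the range the model associates with lying.

```python
# Toy sketch of the "average brain" problem; all values are invented.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical normative sample: truth-telling activation levels
# measured in 200 ordinary subjects.
normative = rng.normal(loc=1.0, scale=0.3, size=200)
mu, sigma = normative.mean(), normative.std()

# A subject whose honest baseline is shifted, say by medication
# or unusual anatomy.
subject_truthful_activation = 1.9
z = (subject_truthful_activation - mu) / sigma
print(f"z = {z:+.1f}")  # roughly +3: this truthful response sits where
# the normative model expects lies, so the subject is mislabelled.
```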
Furthermore, can scans give accurate
testimony about past mental states? If, for instance, a criminal asks for his
sentence to be mitigated because of a brain abnormality, it is nearly
impossible to establish that the abnormality existed when he committed the crime, months
or even years earlier. Brains change with time and experience.
In the Semrau case, the judge noted yet
another hurdle. “While it is unclear from the testimony what the error rates
are or how valid they may be in the laboratory setting, there are no known
error rates for fMRI-based lie detection outside the laboratory setting, ie, in
the ‘real-world’ or ‘real-life’ setting,” he wrote. There is a world of
difference between telling a white lie in a psychologist’s laboratory and
telling a real lie about a murder.
Voodoo science?
Yet another challenge comes from other
scientists. Early in 2009 many neuroscientists were infuriated by an incendiary
paper in the journal Perspectives on Psychological Science which suggested that
most inferences drawn from fMRI scans were basically flimflam. The author, Ed
Vul, a postgraduate student at the Massachusetts Institute of Technology,
pulled no punches in “Voodoo Correlations in Social Neuroscience” – although he
later gave it a blander title.
As a statistician, he contended that “a
disturbingly large, and quite prominent, segment of fMRI research on emotion,
personality, and social cognition is using seriously defective research methods
and producing a profusion of numbers that should not be believed.”
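The core of his complaint was circular, or “non-independent”, analysis: selecting the voxels that correlate best with a behavioural measure and then reporting the correlation of those same voxels. A toy simulation, illustrative only and not drawn from any real study, shows how this procedure conjures impressive numbers out of pure noise.

```python
# Toy demonstration of the circular ("non-independent") analysis
# criticised in Vul's paper; every value here is pure random noise.
import numpy as np

rng = np.random.default_rng(42)
n_subjects, n_voxels = 20, 10_000

voxels = rng.normal(size=(n_subjects, n_voxels))  # fake activations
behaviour = rng.normal(size=n_subjects)           # fake behavioural scores

# Correlate every voxel with the behavioural score...
r = np.array([np.corrcoef(voxels[:, v], behaviour)[0, 1]
              for v in range(n_voxels)])

# ...then (the circular step) keep only the 50 best-scoring voxels
# and report their mean correlation.
top50 = np.sort(r)[-50:]
print(f"mean r of selected voxels: {top50.mean():.2f}")
# Prints roughly 0.6, despite there being no true relationship at all.
```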
Is it fair to use brain scans to send
people to jail if fundamental issues like these are still being debated?
Vul’s scepticism supports the other school
of critics of neurolaw. How do we know that those seductively coloured images
of the brain represent the mind of the person who is being monitored? In other
words, aren’t the brain and the mind distinct?
If they are the same, then the mind is only
an immensely complex physical system – a machine, basically. All of our actions
ultimately have a physical explanation. If this is the case, doesn’t the
traditional concept of criminal responsibility change when prisoners start
pleading that “my brain made me do it”?
However, this debate has been going on for
2,500 years and the independent existence of the mind — and of free will —
still has robust champions. Raymond Tallis, for instance, a British doctor and
philosopher, argues that neurolaw is a worrying development.
“Our knowledge of the relationship between
brain and consciousness, brain and self, and brain and agency is so weak and so
conceptually confused that the appeal to neuroscience in the law courts, the
police station or anywhere else is premature and usually inappropriate. And, I
would suggest, it will remain both premature and inappropriate. Neurolaw is
just another branch of neuromythology,” he wrote in the London Times.
Outside the law
At the moment, though, the determinists are
the most prominent players in neurolaw. Well-endowed centres for law and
neuroscience are springing up at major universities across the US.
Moreover, the brain scan lie detection business
has plenty of potential clients outside the court system. No Lie MRI, for
instance, says that it is potentially a “substitute for drug screenings, resume
validation, and security background checks” for corporations. Clients who once
would have hired a private eye could use it for “risk reduction in dating, trust
issues in interpersonal relationships and issues concerning the underlying
topics of sex, power, and money”.
And the spooky people whose business is
secrets, the intelligence community, are also eyeing fMRI technology. One
bioethicist has pointed out that the Department of Defense helped to fund
groundbreaking research in fMRI lie detection at the University of Pennsylvania.
Jonathan Marks, of Pennsylvania State University, has already expressed his
concern that interrogators will use brain scans as a means of selecting
suspects for more aggressive interrogations.
“There is a profound risk,” he writes,
“that intelligence personnel will be seduced by the glamour of fMRI and
its flashy images, and that they will overlook the limitations of the
technology… the subjectivity of interpretation, and the complexity of brain
function outside the realm of playing cards and controlled studies.”
There are so many potential hazards with
the use and interpretation of this novel technology that Hank Greely, who has
become one of the most enthusiastic advocates of neurolaw, wants government
intervention. He argues that all non-research use of lie-detection technology
should be banned until it has been approved by a government agency.
He envisages a process similar to drug
approval by the US Food and Drug Administration. “We have seen lives shattered
before, with and without these technologies,” he has written. “Requiring proof
of safety and efficacy… is a careful step towards assuring that these
technologies are used wisely.”
With Aditi Sharma’s fate in mind – a life
sentence on the basis of a brain-imaging technology which had never been
peer-reviewed or independently tested – government regulation might not be such
a bad idea.
Michael Cook is editor of MercatorNet and of BioEdge, a bioethics newsletter.