The Stanford Center for Law and Biosciences has decided to leave the WordPress servers for greener pastures: namely, the Stanford Law School blog aggregator.
This address will no longer be updated. All posts from this address have been migrated to the new address:
Please update your bookmarks and RSS feeds accordingly.
I was privileged to give a talk last night at UC Irvine, as part of the Forensic Science Series sponsored by the Newkirk Center for Science and Society and the Center for Psychology and Law. The talk gave a broad overview of “neurolaw,” which I tried to define as an emerging field of study with potential for cautious application to the law. What do I think are the problems in “neurolaw”? Simply put: the science isn’t perfect – particularly in the translation from research findings to forensic value – but we tend to think it is. I believe real advances from neuroscience can change aspects of the legal system for the better, but I also think that eager, early, and inappropriate adoption risks damaging the field irreparably and undermining public and judicial trust in any value of neuroscience.
The research-to-forensic translation is critical for neuroscience as evidence, particularly because I think it may be very vulnerable to a) confirmatory bias in experimental design and b) extremely clever legal arguments. For example, one side offering a brain scan as evidence could cite the huge number of published research studies using the technology as proof that it is generally accepted in the field and thought to be highly accurate and reliable. To someone who is not a researcher in the field, this sounds impressive and convincing. Of course, those who work in the field know that there are huge problems in comparing an individual to group-averaged scans, as well as many factors that should limit the forensic inferences that can reasonably be drawn from a single functional scan (Bayesian considerations, neurophysiological limitations, ruling in or out alternative causes, individual variability in functional architecture, and so forth).
In the discussion period, I heard several stories from practicing lawyers about the brain scans they’ve brought to and responded to in the courtroom. Coming from a research background myself, it is fascinating – and sometimes worrying – to see how the (ideally) objective science and the research community may potentially be manipulated in the adversarial process of a trial.
The talk started with a reference to last week’s (9/15) New York Times article reporting on an Indian case in which a woman was convicted of poisoning her former fiancé at least partly on the basis of an EEG assessment that purportedly proved her “experiential knowledge” of the crime. We are trying to obtain the judge’s opinion from that case, which was reported to devote nine pages to a defense of the technology. This should remind us that “neurolaw” is already being applied to the investigation and courtroom phases of the legal system. If serious mistakes are made, the consequences could be grave both for the people whose lives may be harmed and for the credibility of the field itself.