Category Archives: evidence

The Stanford Center for Law and Biosciences has decided to leave the WordPress servers for greener pastures: namely, the Stanford Law School blog aggregator.

This address will no longer be updated. All posts from this address have been migrated to the new address:

http://blogs.law.stanford.edu/lawandbiosciences/

Please update your bookmarks and RSS feeds accordingly.

What a dead salmon reminds us about fMRI analysis

This has been making the rounds in the neuroscience world, but deserves attention in cross-disciplinary fields. A group of top-notch fMRI researchers presented an unusual paper at June’s Human Brain Mapping conference.

Paper title: Neural correlates of interspecies perspective taking in the post-mortem Atlantic Salmon: An argument for multiple comparisons correction

Blog headline: fMRI Gets Slap in the Face with a Dead Fish

Salmon have very small brains.

In short, researchers scanned a dead fish while it was “shown a series of photographs depicting human individuals in social situations. The salmon was asked to determine what emotion the individual in the photo must have been experiencing.”

Clearly, the fish did not perform well at the task, so we have not learned much about interspecies perspective taking. The work is, however, a compelling and humorous demonstration of the problem of multiple comparisons. This is the statistical principle that if you look at enough bits of information (i.e., run enough statistical tests), some will appear to show the effect you are looking for purely by chance. In fMRI experiments there are a LOT of pieces of data to compare, and without statistical correction for this phenomenon (which is not always applied), some comparisons will indeed come out significant by chance alone.
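To make this concrete, here is a minimal simulation sketch in Python. The voxel count echoes the 65,000 figure Bennett quotes below; the trial count and thresholds are illustrative assumptions, not the salmon study’s actual design.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_voxels = 65_000   # roughly the number of voxels in a whole-brain volume
n_trials = 20       # hypothetical number of measurements per voxel
alpha = 0.001       # a common uncorrected voxelwise threshold

# Pure noise: no voxel carries any true signal.
data = rng.normal(size=(n_voxels, n_trials))

# One-sample t-test of each voxel's mean against zero.
t, p = stats.ttest_1samp(data, popmean=0.0, axis=1)

print(f"uncorrected p < {alpha}: {(p < alpha).sum()} false positives "
      f"(~{n_voxels * alpha:.0f} expected by chance)")

# Bonferroni correction: test each voxel at alpha / n_tests instead.
print(f"Bonferroni-corrected: {(p < alpha / n_voxels).sum()} false positives")
```

With 65,000 tests of pure noise, roughly 65 voxels clear the uncorrected threshold on every run; after correction, essentially none do.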

Lead author Craig Bennett explains further on his blog:

In early 2008 I was working with my co-adviser George Wolford on a presentation he was giving regarding the multiple comparisons problem in fMRI. We were discussing false positives in MRI phantom data and I brought up the idea of processing the salmon fMRI data to look for some ‘active’ voxels. I ran the fish data through my SPM processing pipelines and couldn’t believe what I saw. Sure, there were some false positives. Just about any volume with 65,000 voxels is going to have some false positives with uncorrected statistics. Rather, it was where the false positives occurred that really floored me. A cluster of three significant voxels were arranged together right along the midline of the salmon’s brain.

Remember that the fish was dead. There were surely no BOLD signal changes going on in a dead fish’s brain. This is likely not a physiological artifact; it is a statistical one. Furthermore, the voxels were clustered together, something that would be expected of an “actual” activation and is therefore often used as a thresholding criterion in analysis. Also, it was just one fish! (There is no apparent speculation in the paper about what might have happened if this were a school of fish compared to an appropriate control school of fish.)
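Why would chance false positives cluster? fMRI volumes are spatially smoothed during preprocessing, so neighboring voxels are correlated and noise that crosses the threshold tends to do so in small contiguous clumps. A rough sketch of the effect (grid size, smoothing width, and threshold are arbitrary assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, label

rng = np.random.default_rng(1)

noise = rng.normal(size=(64, 64, 30))         # a pure-noise "brain volume"
smoothed = gaussian_filter(noise, sigma=2.0)  # typical fMRI preprocessing step
smoothed /= smoothed.std()                    # re-standardize after smoothing

above = smoothed > 3.1                        # roughly one-tailed p < 0.001
clusters, n_clusters = label(above)           # group contiguous suprathreshold voxels
sizes = np.bincount(clusters.ravel())[1:]     # voxels per chance cluster

print(f"{n_clusters} chance clusters; largest = {sizes.max() if n_clusters else 0} voxels")
```

Typical runs yield dozens of small chance clusters in pure noise, including some the size of the salmon’s three-voxel “activation.”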

Bennett et al. are apparently having a hard time getting the paper published. Multiple comparisons correction in fMRI studies is a contentious issue, as some researchers think it may be overly conservative and thus miss true positives. As a solution, Bennett suggests reporting both sets of results, corrected and uncorrected.

The moral of the story for interdisciplinary folks: note whether multiple comparisons corrections have been reported (or not). And always bear in mind that a great many assumptions and analytic decisions lie behind the data ultimately reported in any neuroimaging study.

- Emily Murphy (h/t Alexis Madrigal @ Wired)

Update on Indian BEOS case: Accused released on bail

We wrote in December about the murder trial in India that relied heavily on a Brain Electrical Oscillations Signature (BEOS) test to prove that Aditi Sharma had “experiential knowledge” of the poisoning of her former fiancé, Udit Bharati. Aditi and her husband, Pravin Khandelwal, were sentenced to life in prison. The original opinion, which we believe contains many serious flaws, is available at the original post.

We recently learned, courtesy of some research by Rajat Rana (thanks to Vinita Kailasanath!), that Aditi and Pravin have been granted bail by the Bombay High Court (documents: Aditi’s bail order and Pravin’s bail order). Pravin’s sentence was suspended on the grounds that there was no real evidence tying him to the case as a conspirator. Aditi was released on the ground that the evidence of her possessing the arsenic-laced prasad was not compelling; indeed, “the possibility of plantation cannot not be ruled out” (sic). The BEOS evidence is not mentioned in either order.

Watch this space for further news and a complete analysis.

- Emily Murphy

No Lie MRI being offered as evidence in court

It has come to our attention that No Lie MRI has produced a report that is presently being offered as evidence in a court in Southern California.  A hearing about the admissibility of this evidence is imminent.

The case is a child protection hearing being conducted in the juvenile court.  In brief, and because the details of the case are sealed and of a sensitive nature, the issue is whether a minor has suffered sexual abuse at the hands of a custodial parent and should remain removed from the home.  The parent has contracted No Lie MRI and apparently undergone a brain scan.  The No Lie MRI-produced report reads in part as follows:

[Sanitized excerpt of the No Lie MRI report]

The defense plans to claim that the fMRI-based lie detection (or “truth verification”) technology is accurate and generally accepted within the relevant scientific community, in part by narrowly defining that community as only those who research and develop fMRI-based lie detection. [Note: California follows its own version of the Frye test of admissibility, not the current federal test under Daubert.]

Limiting the “relevant community” to only those who research and develop fMRI-based lie detection is without merit, if only because such a definition precludes effective or sufficient peer review. Indeed, it is arguable that such a narrowly defined community has a strong financial incentive to exaggerate its claims of accuracy and overlook unanswered questions if such techniques become “legally admissible.”

The few practitioners who research and develop fMRI-based deception detection are not the only people qualified to comment on the accuracy and validity of the technique. Statisticians familiar with Bayesian analysis, cognitive neuroscientists familiar with the technical and analytical constraints, and researchers working to elucidate the neural basis of memory, decision-making, and social behavior should all make up the “relevant scientific community” for such a complex and as-yet poorly characterized technology. Further, I suspect that the community of peer reviewers who have evaluated the articles proffered in support of fMRI-based deception detection is a useful proxy for the legally relevant scientific community, and it extends well beyond the handful of researchers working directly on the technique.
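A toy Bayesian calculation shows why statisticians belong in that community: the evidential weight of a “truthful” reading depends not just on the test’s headline accuracy but on the prior probability of deception. All numbers below are hypothetical, not No Lie MRI’s actual claims.

```python
def p_lying_given_truthful_reading(sensitivity, specificity, prior_lying):
    """P(lying | test reads 'truthful'), via Bayes' rule.

    sensitivity: P(test reads 'deceptive' | subject is lying)
    specificity: P(test reads 'truthful'  | subject is honest)
    prior_lying: prior probability that the subject is lying
    """
    miss = 1.0 - sensitivity                   # P('truthful' | lying)
    p_truthful = miss * prior_lying + specificity * (1.0 - prior_lying)
    return miss * prior_lying / p_truthful

# A hypothetical "90% accurate" test, under different priors:
for prior in (0.1, 0.5, 0.9):
    post = p_lying_given_truthful_reading(0.9, 0.9, prior)
    print(f"prior P(lying) = {prior:.1f} -> P(lying | 'truthful' reading) = {post:.2f}")
```

Even with 90% sensitivity and specificity, a “truthful” reading leaves a 50% chance of deception when the prior is high; the scan’s verdict cannot be interpreted without assumptions about base rates.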

I will post again soon with more details and criticisms of the claims in the No Lie MRI statement: mainly, that the external validation task was inconclusive for this individual, yet testing proceeded to the case-related probe questions and the result was treated as determinative that the parent was not lying in denying sexual abuse of the child. Further, repeating three critical questions (as above) only four times each seems very unlikely to produce sufficient statistical power to detect a neural response robust enough to be accurately classified as deceptive or non-deceptive.
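On the power point, here is a back-of-the-envelope simulation. It treats the 3 × 4 = 12 critical trials as independent measurements, a simplifying assumption that, if anything, flatters the design; the per-trial effect sizes and alpha level are likewise hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_trials, alpha, n_sims = 12, 0.05, 10_000

for d in (0.3, 0.5, 0.8):      # Cohen's d of the per-trial "deception" signal
    hits = 0
    for _ in range(n_sims):
        # Simulate 12 trials carrying a true effect of size d, then test
        # whether a one-sample t-test detects it at the given alpha.
        sample = rng.normal(loc=d, scale=1.0, size=n_trials)
        _, p = stats.ttest_1samp(sample, popmean=0.0)
        hits += p < alpha
    print(f"d = {d}: power ~ {hits / n_sims:.2f}")
```

Under these assumptions, even a large per-trial effect (d = 0.8) is detected only about 70% of the time, and a medium one well under half the time; basing an individual classification on 12 trials is fragile.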

Please add your own views and suggestions, and check back for updates.

- Emily Murphy