Technology transfer offices get excited about licensing patents that purport to use brain scans for lie detection.
A recent feature story in the New Yorker by Margaret Talbot took a skeptical look at entrepreneurs promoting a new generation of lie detector technology based on brain scans using functional magnetic resonance imaging, or fMRI. The story had a notable charity element.
A start-up company with the delightful name of No Lie MRI, founded by Joel Huizenga, uses technology based on research conducted at the University of Pennsylvania (EIN 23-1352685 Form 990) by psychiatrist Daniel Langleben. Scanning the brains of people who were instructed to lie about the identity of a playing card they were given, Dr. Langleben found increased activity in three parts of the brain, and one study successfully identified the liars seventy-five percent of the time.
What is fascinating about this is that the research became the basis for a patent owned by the University of Pennsylvania. Dr. Langleben himself admitted that the results were far from conclusive, but the article goes on:
Nevertheless, the University of Pennsylvania licensed the pending patents on his research to No Lie in 2003, in exchange for an equity position in the company. Langleben didn’t protest. As he explained to me, “It’s good for your résumé. We’re encouraged to have, as part of our portfolio, industry collaborations.” He went on, “I was trying to be a good boy. I had an idea. I went to the Center of Technology Transfer and asked them, ‘Do you like this?’ They said, ‘Yeah, we like that.’ ”
There was a similar instance with the Medical University of South Carolina, a state university that operates a 501(c)(3) technology transfer subsidiary, MUSC Foundation for Research & Development (EIN 57-1031624 Form 990). There, Andrew Kozel and Mark George did their own study in which subjects were instructed to lie. They also found brain activity associated with lying, but not in the same areas of the brain.
Much of the article deals with the history of lie detection technology. The idea of lie detection is appealing, but so far technology hasn't achieved high accuracy—and the impact of false results on people's lives can be enormous. What's troublesome is that in the push for earned income from licensing, universities could be encouraging more research into popular but suspect science.
The reporter claimed that bioethicists could be making matters worse by discussing the impact of new lie detection technology as though it were a done deal. I suspect she is alluding to the March-April 2005 issue of the American Journal of Bioethics, which had neuroethics as its theme. A glance at the table of contents shows that there was plenty of skepticism expressed in that forum, including article titles like: Neuroimaging: Revolutionary Research Tool or a Post-Modern Phrenology?
But what struck me in this issue was an article that included Dr. Langleben as a co-author (with Paul Root Wolpe and Kenneth R. Foster) titled Emerging Neurotechnologies for Lie-Detection: Promises and Perils. The conclusion of that article included this observation:
Premature commercialization will bias and stifle the extensive basic research that still remains
to be done, damage the long-term applied potential of these powerful techniques, and lead to their misuse before they are ready to serve the needs of society.
It's difficult for me to square this comment with the University of Pennsylvania's willingness to license the patents in exchange for an equity interest in a company called No Lie MRI.