Coddling Human Guinea Pigs
Endless red tape and paternalism toward study volunteers are having a stifling effect on clinical research.
Let's agree that people who are altruistic enough to volunteer for experiments should know what they're in for—if the study is testing a drug that has harmed lab animals, for instance, or if it involves a psychological manipulation that might leave emotional scars. That's why all federally funded research on people must be vetted by panels charged with protecting "human subjects." But fretting about the privacy rights of suicide bombers?
A few years ago anthropologist Scott Atran, who has done pioneering work on what drives people to become terrorists, asked the panel at the University of Michigan, where he is a professor, to OK a study funded by the National Science Foundation in which he would interview terrorists, including jailed members of the group behind the 2005 Bali bombing. The panel balked. Prisoners are in no position to give informed consent, it said. There was grave concern that the prisoners might reveal plans that could get them in more trouble. The panel, which included a musicologist and a journalist, forbade Atran to ask the jihadis personal questions (even though a goal of the study was learning what motivates someone to become a terrorist); doing so would violate their right to privacy. Eventually Atran got a partial OK, but the experience made one thing crystal clear to him. "Most of this," he says, "is nuts."
Ask scientists why it takes so long to discover new treatments for disease, effective ways to prevent cancer or dementia, or ways to diminish the allure of jihad, and they point to two culprits (in addition to the fact that this stuff is hard). One is that "translational" research, in which fundamental biological discoveries are put to practical use, just doesn't have the sex appeal of basic science, the kind done in test tubes and lab animals, which yields fundamental new insights into, say, molecular mechanisms of disease. Most of the people who evaluate study proposals for the National Institutes of Health "are basic scientists," notes Daniel Sessler of the Cleveland Clinic. "In basic science, being cutting edge and innovative is what's valued. But in clinical research you'd never take something completely innovative and try it in people." It must have already been proven in animals. "So the answer comes back: this is not innovative enough," he says. Sessler has been stymied in his attempt to get NIH funding for a study of whether the form of anesthesia—regional or general—used during surgery affects long-term cancer survival, something hinted at in animal studies. "More animal studies won't help," he says. "The commitment from the top [of NIH to translational research] is real, but it hasn't filtered down" to scientists who evaluate grant proposals.
That's what clinical scientists, who feel like the stepchildren of biomedical research compared with the cool kids who study fruit flies, have long suspected, and last month quantitative proof arrived. Analyzing 92,922 NIH grant applications from 2000 to 2004, scientists led by Theodore Kotchen of the Medical College of Wisconsin found that those for research with people got much lower scores, and thus were less likely to be funded, than those for research on cell cultures, animals and the like. "There is still the perception that clinical research doesn't have the cachet of discovering a gene," says Kotchen. But there is another factor, which brings us to the second barrier to research aimed directly at helping people. NIH reviewers look at a proposal and, knowing what they do about the human-subjects panels back at the scientists' universities, figure "this will never pass muster," says Kotchen. They give the proposal too low a score to get funded.
Many of these institutional review boards (IRBs) do an exemplary job, keeping scientists on the ethical up-and-up. Then there are those that raise concerns about the privacy rights of terrorists. This "hyperprotectionism," lamented an editorial in the Journal of the American Medical Association, "can have a stifling effect on research productivity." One measure of that might be the number of new compounds approved by the Food and Drug Administration: just over 35 a year on average in the mid-1990s, 23 from 2001 to 2004, and an abysmal 19 last year—the fewest since the early 1980s. Or you can measure it by how long it takes a clinical trial for cancer to get off the ground: 171 days of red tape, finds David Dilts of Vanderbilt University.
Outside biomedicine, you can measure the stifling effect in knowledge not gained. One psychologist who had proposed to study a treatment for phobias had a chemist and an archaeologist (you don't need expertise in the subject of a proposal to serve on an IRB) tell him the research would not yield anything useful. Another IRB balked at a study of why children make false statements about something they experienced; the kids were to see a video in which a policeman misled a child about something a fireman had done, and the IRB said it is unethical to show cops in a bad light. The study was proposed by scientists whose work established that questioning children in the wrong way can lead them to make baseless accusations, as occurred in some "Satanic abuse" cases in the 1990s. Doing studies on people "is so full of red tape that even experienced researchers are increasingly reluctant to tackle it," a scientist at the University of California, San Diego, told me. "It is so much simpler to deal with a mouse." But haven't we cured enough of them?