Earlier this morning I received an email from Christine Maynard, a writer and consultant from Natchez, Louisiana. I'm not sure if she heard me on the radio or came across crowdsourcing in some other way, but she raises an issue I've been wanting to dig into ever since publishing the original crowdsourcing article over two years ago: To what degree could the crowd lend its brainpower to the process of, say, diagnosing or suggesting treatments for diseases?
One is tempted to dismiss this notion out of hand. Let the crowd count birds, develop cell phone applications, even create a new restaurant. But keep them away from my MRI scans. This is a pretty understandable reaction, and I've tended to view attempts to crowdsource the professions (namely, law and medicine) with considerable skepticism. But given the poor quality of much of the healthcare in the United States, to say nothing of the developing world, could the crowd really make so big a mess of things? Here's Maynard:
Crowdsourcing would work for taking CT and MRI reading out of the hands of [radiologists] and into the hands of individuals who have become medical experts in pursuit of their own elusive diagnoses. Radiologists read scans in a couple of minutes, referring to any previous scans in hospital records, which are pulled up as soon as [the patient's Social Security number] is entered, and simply note differences, if any. They have clues, from doctors' "reason for request," which is their loophole to NOT closely examine a scan.
The technology is wonderful. The vast majority of specialists refuse to do "exploratory" despite their patients' desperate attempts to get relief from agony and, hopefully, live, because of the state of the art. "If there was a problem, we would have seen it," they say. Yet failure to find a problem, or five different interpretations, none hitting the mark, is the norm, resulting in no treatment or mistreatment.
The missing ingredient is better interpretation of diagnostics, which could be accomplished by crowdsourcing. Ideally the demographic without borders would be any reasonably intelligent person who has been on a medical merry-go-round with no help proffered, and who has resorted to examining and interpreting, for months, maybe years, their own scans.
I don't know if missing the mark is exactly the norm, but it's certainly a frequent occurrence. (Full disclosure: Our son had had two MRIs by the age of six months. The readings, by separate radiologists, contradicted each other, and neither was conclusive.) Without going into the personal details of her story, Maynard has, as she says, endured an extraordinary range of interpretations.
While I would never advocate going to the masses for radiology (this would, clearly, be ridiculous), that isn't quite what Maynard implies. Why not have an informed group of people—some might even be radiologists, or MDs in other fields—reviewing scans that were posted online? I fear Maynard is right that overworked professionals have little time and little incentive to pore carefully over MRI and CT scans. If this particular crowd of semi-professionals—what Scott Page, the U. Michigan collective intelligence brillionaire I lean on in the crowdsourcing book, might call a "crowd of experts*"—were willing to act as a fact-checking backup resource, why not tap them?
I'm getting a lot of additional traffic today due to all those afore-posted radio gigs. So I put the question to you, and them: Would you trust the crowd to help analyze your medical data?
(Ed's Note: Written on the fly between 25 radio interviews (we had two cancellations) and one Red Sox/Yankees game. Forgive mistakes and correct me in comments.)
* Pre-correction correction: Scott actually calls this a "crowd of models" in his excellent book The Difference, I believe. I've taken liberties with his terminology.