Reviewing the EHRevent report form on this day (archived here, PDF), I note the following multiple choice question on page 7 (emphasis mine):
Notwithstanding the event you are reporting, has the adoption and use of an EHR by your practice added to patient safety, improved care or improved documentation? Select one option.
o Yes, definitely
o Likely
o Not sure
o No impact
Bias, or at least survey amateurism, is clearly evident in this question. And perhaps something more?
Missing is this option:
o None of the above; EHR adoption did not "add to"; rather, it subtracted, as worsening occurred
A fundamental rule of surveys is that the choices should not artificially limit the survey taker's ability to provide critical or relevant information.
As Ross Koppel, PhD, a sociologist studying healthcare IT at the University of Pennsylvania stated at the Feb. 25, 2010 HHS Certification/Adoption Workgroup Meeting on Health IT Safety (minutes of that meeting are archived at this link, PDF):
... Like everyone else, I want HIT to increase patient safety, care efficiency, treatment quality, savings, and drug ordering guidance. I want HIT to provide coherent structures for test results and other data, and I wanted to provide better visualization of complex clinical data.
Unlike many of my colleagues here who are HIT scholars and advocates, however, because of my training perhaps, I've studied the surveys that have been used to guide and explain the current HIT strategy.
These surveys explored why doctors and hospitals have not embraced HIT's benefits. The findings pointed to the cost of HIT, to the physicians' resistance. They called them technophobic, hidebound, and perhaps most gruesome, too old. It also talked about overwhelmed hospital IT staff and other user pathologies or user inadequacies.
Now the years of research on HIT suggested that those answers could not be complete. They weren't right. The reason the surveys found this, I've investigated, was I looked at the questions that were asked of the doctors and hospitals. And the only answer options dealt with the problems of hospitals and doctors. In other words, they found only the questions that they asked about physician difficulties, doctor difficulties. They didn't look for any other options, and they didn't even give other option answers to talk about the following issues.
So they could have asked, does HIT slow or speed your clinical work? Are EHR data presented in helpful ways, or do they generate unnecessary cognitive burdens because, for example, the data that should be contiguous are in five separate screens, where you're scrolling across vast wastelands of rows and columns looking for the needed information. How many information displays are understandable or are disarticulated, confusing, or missing key data? Does HIT distract from your patient care or improve it? And the last one I'll ask, although I have about 50 others, how responsive are HIT vendors to acknowledging and repairing defects? How quickly are these defects repaired?
The absence of the relevant questions diverts us from understanding the actual HIT needs of clinicians and patients. Now I'm certain that the people who asked these questions, designed these surveys, were not intentionally deceptive. Well, I'm certain most were not intentionally deceptive. But the restricted options reflect a series of assumptions or … that says HIT is intrinsically beneficial. Anything that encourages HIT is good for patient safety. Anything that discourages it or retards it is bad by definition.
The creators of the EHRevent survey apparently were not present, or not listening, during Dr. Koppel's presentation.
I have not yet reviewed the other survey questions in depth for similar issues, but this one stood out like a sore thumb.
I cannot imagine an objective, competent scientist committing such an error.
-- SS