It is an interesting interview. I certainly find the recognition of the need for an EHR/clinical IT problem-reporting service a major cultural advance in healthcare.
It's still unclear to me how -- and why -- this organization originated with little to no public knowledge or involvement, especially considering the types of organizations mentioned below that participated, and how it will interact with the myriad healthcare IT stakeholders.
Here's an explanation by Dr. Fotsch:
... We work with a not-for-profit board called the iHealth Alliance. The Alliance is made up of medical society executives, professional liability carriers, and liaison representatives from the FDA. They govern some of the networks that we run, and in exchange for that, help us recruit physicians. Professional liability carriers, for example, promote our services that send drug alerts to doctors because that’s good and protective from a liability standpoint.
In the course of our conversations with them roughly a year ago, when we were talking about adding some drug safety information into electronic health records, we came across the fact that there were concerns from the liability carriers that there was no central place for reporting adverse EHR events or near misses or potential problems or issues with electronic health records.
[Translation: the carriers saw their losses potentially increasing as a result of EHR-related litigation, and decided to do something proactive - ed.]
They were interested in creating a single place where they could promote to their insured physicians that they could report adverse EHR events. Then it turned out that medical societies had similar concerns.
[That must have been one of the best-kept secrets on Earth, considering the promotion EHRs have received as a miracle-working technology and the lack of expressed concern from those societies - ed.]
Rather than have each of them create a system, the Alliance took on a role of orchestrating all of the interests, including some interest from the FDA and ONC in creating an electronic health record problem reporting system. That’s how it came into play.
Our role in it, in addition to having a seat on the iHealth Alliance board, was really in network operations — in running the servers, if you will, which didn’t seem like a very complicated task. Since business partners we rely on for our core business were interested in it, it was easy to say yes. It frankly turned out to be somewhat more complicated than we originally thought [I predict they haven't seen anything yet; wait until they get knee-deep into real-world EHR issues - ed.], but now it’s up and available.
While I find the recognition of the need for an EHR/clinical IT reporting service a major advancement, I am nonetheless troubled by certain statements made by Dr. Fotsch. They seem at odds with the theoretical and empirical findings of medical informatics, social informatics, human-computer interaction, and other fields relevant to health IT evaluation, and/or seem to demonstrate biases about HIT. My comments are interspersed in brackets:
Fotsch:
… Probably what we’re seeing more often than not, the real challenge with EHRs, like any technology, turns out to be some form of user error.
[What about contributory or causative designer error? – ed.]
“I didn’t know it would do that”
[Why did the user not know? Lack of training, poor manuals, or an overly complex information system lacking informative messages and consistent control-action relationships, for example? - ed.]
... or “I didn’t know that it pre-populated that”
[Why did it pre-populate? Was that inappropriate for the clinical context, such as in this example? - ed.]
... or “I didn’t know I shouldn’t cut and paste”
[Then why did the software designers enable cut and paste without some informative message on overuse, such as the length of text cut and pasted? - ed.]
... or “I wasn’t paying attention to this”
[Perhaps due to distractions from mission-hostile user interfaces? - ed.]
... or maybe the user interface was a little confusing
[What is "a little confusing?" (Is that like "A little pregnant?) And why was it confusing? User intellectual inadequacy, or software design issues leading to cognitive overload? - ed.]
Actual software errors appear to be the exception rather than the rule as it relates to EHR events.
["Actual software errors" are defined as, what, exactly--? Loss of database relational integrity as a result of a programming error, as apparently recently happened at Trinity Health, a large Catholic hospital chain as reported in HIStalk? Memory leaks from poor code? Buffer overflows? What?]
That’s at least as I understand it.
[Understand it from whom? Hopefully not from me or my extensive website on these issues, the site whose header sadly now appears as shown below - ed.]
A new, unfortunate header for my website on health IT failures.
In summary, a "blame the user" attitude seems apparent. There appears to be little acknowledgment of the concept of IT "errorgenicity" - the capacity of a badly designed or poorly implemented information system to facilitate error - and of the systemic nature of error in complex organizations, to which ill-done IT can contribute.
These are concepts that were understood long ago in mission-critical settings, as in this mid-1980s piece from the Air Force, cited in my previously linked eight-part series on mission-hostile health IT:
From "GUIDELINES FOR DESIGNING USER INTERFACE SOFTWARE"
ESD-TR-86-278
August 1986
Sidney L. Smith and Jane N. Mosier
The MITRE Corporation
Prepared for Deputy Commander for Development Plans and Support Systems, Electronic Systems Division, AFSC, United States Air Force, Hanscom Air Force Base, Massachusetts.

... SIGNIFICANCE OF THE USER INTERFACE
The design of user interface software is not only expensive and time-consuming, but it is also critical for effective system performance. To be sure, users can sometimes compensate for poor design with extra effort. Probably no single user interface design flaw, in itself, will cause system failure. But there is a limit to how well users can adapt to a poorly designed interface. As one deficiency is added to another, the cumulative negative effects may eventually result in system failure, poor performance, and/or user complaints.
Outright system failure can be seen in systems that are underused, where use is optional, or are abandoned entirely. There may be retention of (or reversion to) manual data handling procedures, with little use of automated capabilities. When a system fails in this way, the result is disrupted operation, wasted time, effort and money, and failure to achieve the potential benefits of automated information handling.
In a constrained environment, such as that of many military and commercial information systems, users may have little choice but to make do with whatever interface design is provided. There the symptoms of poor user interface design may appear in degraded performance. Frequent and/or serious errors in data handling may result from confusing user interface design. [In medicine, this often translates to reduced safety and reduced care quality - ed.] Tedious user procedures may slow data processing, resulting in longer queues at the checkout counter, the teller's window, the visa office, the truck dock [the hospital floor or doctor's office - ed.], or any other workplace where the potential benefits of computer support are outweighed by an unintended increase in human effort.
In situations where degradation in system performance is not so easily measured, symptoms of poor user interface design may appear as user complaints. The system may be described as hard to learn, or clumsy, tiring and slow to use. [Often heard in medicine, but too often blamed on "physician resistance" - ed.] The users' view of a system is conditioned chiefly by experience with its interface. If the user interface is unsatisfactory, the users' view of the system will be negative regardless of any niceties of internal computer processing.
I am not entirely happy when the CEO of an organization taking on the responsibility of serving as a central focus for EHR error reporting makes statements consistent with unfamiliarity with important HIT-relevant domains, as well as with possible pro-IT, anti-user bias.
For that reason, as well as the other questions raised in my prior posts (such as the onerous legal contract and the public's apparent inability to easily view the actual report texts themselves), I cannot recommend use of their site for EHR problem reporting.
I recommend the continued use of the FDA facilities until such time as a compelling argument exists to do otherwise.
-- SS
Addendum 11/28/10:
This passage ends the main essay at my site "Contemporary Issues in Medical Informatics: Common Examples of Healthcare Information Technology Difficulties" and is quite relevant here:
... An article worth reviewing is "Human error: models and management" by James Reason (a fitting name!), BMJ 2000;320:768-770 (18 March), http://www.bmj.com/cgi/content/full/320/7237/768:
Summary points:
- Two approaches to the problem of human fallibility exist: the person and the system approaches
- The person approach focuses on the errors of individuals, blaming them for forgetfulness, inattention, or moral weakness
- The system approach concentrates on the conditions under which individuals work and tries to build defenses to avert errors or mitigate their effects
- High reliability organizations—which have less than their fair share of accidents—recognize that human variability is a force to harness in averting errors, but they work hard to focus that variability and are constantly preoccupied with the possibility of failure.

-- SS