Why Doctors Hate Their Computers

Source: The New Yorker, Atul Gawande
Photo: Digitization promises to make medical care easier and more efficient; instead, doctors feel trapped behind their screens. (Ben Wiseman)

Digitization promises to make medical care easier and more efficient. But are screens coming between doctors and patients?

On a sunny afternoon in May, 2015, I joined a dozen other surgeons at a downtown Boston office building to begin sixteen hours of mandatory computer training. We sat in three rows, each of us parked behind a desktop computer. In one month, our daily routines would come to depend upon mastery of Epic, the new medical software system on the screens in front of us. The upgrade from our home-built software would cost the hospital system where we worked, Partners HealthCare, a staggering $1.6 billion, but it aimed to keep us technologically up to date.

More than ninety per cent of American hospitals have been computerized during the past decade, and more than half of Americans have their health information in the Epic system. Seventy thousand employees of Partners HealthCare—spread across twelve hospitals and hundreds of clinics in New England—were going to have to adopt the new software. I was in the first wave of implementation, along with eighteen thousand other doctors, nurses, pharmacists, lab techs, administrators, and the like.

The surgeons at the training session ranged in age from thirty to seventy, I estimated—about sixty per cent male, and one hundred per cent irritated at having to be there instead of seeing patients. Our trainer looked younger than any of us, maybe a few years out of college, with an early-Justin Bieber wave cut, a blue button-down shirt, and chinos. Gazing out at his sullen audience, he seemed unperturbed. I learned during the next few sessions that each instructor had developed his or her own way of dealing with the hostile rabble. One was encouraging and parental, another unsmiling and efficient. Justin Bieber took the driver’s-ed approach: You don’t want to be here; I don’t want to be here; let’s just make the best of it.

I did fine with the initial exercises, like looking up patients’ names and emergency contacts. When it came to viewing test results, though, things got complicated. There was a column of thirteen tabs on the left side of my screen, crowded with nearly identical terms: “chart review,” “results review,” “review flowsheet.” We hadn’t even started learning how to enter information, and the fields revealed by each tab came with their own tools and nuances.

But I wasn’t worried. I’d spent my life absorbing changes in computer technology, and I knew that if I pushed through the learning curve I’d eventually be doing some pretty cool things. In 1978, when I was an eighth grader in Ohio, I built my own one-kilobyte computer from a mail-order kit, learned to program in BASIC, and was soon playing the arcade game Pong on our black-and-white television set. The next year, I got a TRS-80 from RadioShack and became the first kid in my school to turn in a computer-printed essay (and, shortly thereafter, the first to ask for an extension “because the computer ate my homework”). As my Epic training began, I expected my patience to be rewarded in the same way.

My hospital had, over the years, computerized many records and processes, but the new system would give us one platform for doing almost everything health professionals needed—recording and communicating our medical observations, sending prescriptions to a patient’s pharmacy, ordering tests and scans, viewing results, scheduling surgery, sending insurance bills. With Epic, paper lab-order slips, vital-signs charts, and hospital-ward records would disappear. We’d be greener, faster, better.

But three years later I’ve come to feel that a system that promised to increase my mastery over my work has, instead, increased my work’s mastery over me. I’m not the only one. A 2016 study found that physicians spent about two hours doing computer work for every hour spent face to face with a patient—whatever the brand of medical software. In the examination room, physicians devoted half of their patient time facing the screen to do electronic tasks. And these tasks were spilling over after hours. The University of Wisconsin found that the average workday for its family physicians had grown to eleven and a half hours. The result has been epidemic levels of burnout among clinicians. Forty per cent screen positive for depression, and seven per cent report suicidal thinking—almost double the rate of the general working population.

Something’s gone terribly wrong. Doctors are among the most technology-avid people in society; computerization has simplified tasks in many industries. Yet somehow we’ve reached a point where people in the medical profession actively, viscerally, volubly hate their computers.

On May 30, 2015, the Phase One Go-Live began. My hospital and clinics reduced the number of admissions and appointment slots for two weeks while the staff navigated the new system. For another two weeks, my department doubled the time allocated for appointments and procedures in order to accommodate our learning curve. This, I discovered, was the real reason the upgrade cost $1.6 billion. The software costs were under a hundred million dollars. The bulk of the expenses came from lost patient revenues and all the tech-support personnel and other people needed during the implementation phase.

In the first five weeks, the I.T. folks logged twenty-seven thousand help-desk tickets—three for every two users. Most were basic how-to questions; a few involved major technical glitches. Printing problems abounded. Many patient medications and instructions hadn’t transferred accurately from our old system. My hospital had to hire hundreds of moonlighting residents and pharmacists to double-check the medication list for every patient while technicians worked to fix the data-transfer problem.

Many of the angriest complaints, however, were due to problems rooted in what Sumit Rana, a senior vice-president at Epic, called “the Revenge of the Ancillaries.” In building a given function—say, an order form for a brain MRI—the design choices were more political than technical: administrative staff and doctors had different views about what should be included. The doctors were used to having all the votes. But Epic had arranged meetings to try to adjudicate these differences. Now the staff had a say (and sometimes the doctors didn’t even show), and they added questions that made their jobs easier but other jobs more time-consuming. Questions that doctors had routinely skipped now stopped them short, with “field required” alerts. A simple request might now involve filling out a detailed form that took away precious minutes of time with patients.

Rana said that these growing pains were predictable. The Epic people always build in a period for “optimization”—reconfiguring various functions according to feedback from users. “The first week,” he told me, “people say, ‘How am I going to get through this?’ At a year, they say, ‘I wish you could do this and that.’ ”

I saw what he meant. After six months, I’d become fairly proficient with the new software. I’d bring my laptop with me to each appointment, open but at my side. “How can I help?” I’d ask a patient. My laptop was available for checking information and tapping in occasional notes; after the consultation, I completed my office report. Some things were slower than they were with our old system, and some things had improved. From my computer, I could now remotely check the vital signs of my patients recovering from surgery in the hospital. With two clicks, I could look up patient results from outside institutions that use Epic, as many now do. For the most part, my clinical routine did not change very much.

As a surgeon, though, I spend most of my clinical time in the operating room. I wondered how my more office-bound colleagues were faring. I sought out Susan Sadoughi, whom an internist friend described to me as one of the busiest and most efficient doctors in his group. Sadoughi is a fifty-year-old primary-care physician, originally from Iran, who has worked at our hospital for twenty-four years. She’s married to a retired Boston police lieutenant and has three kids. Making time in her work and family schedule to talk to me was revealingly difficult. The only window we found was in the early morning, when we talked by phone during her commute.

Sadoughi told me that she has four patient slots per hour. If she’s seeing a new patient, or doing an annual physical, she’ll use two slots. Early on, she recognized that technology could contribute to streamlining care. She joined a committee overseeing updates of a home-built electronic-medical-record system we used to rely on, helping to customize it for the needs of her fellow primary-care physicians. When she got word of the new system, she was optimistic. Not any longer. She feels that it has made things worse for her and her patients. Before, Sadoughi almost never had to bring tasks home to finish. Now she routinely spends an hour or more on the computer after her children have gone to bed.

She gave me an example. Each patient has a “problem list” with his or her active medical issues, such as difficult-to-control diabetes, early signs of dementia, a chronic heart-valve problem. The list is intended to tell clinicians at a glance what they have to consider when seeing a patient. Sadoughi used to keep the list carefully updated—deleting problems that were no longer relevant, adding details about ones that were. But now everyone across the organization can modify the list, and, she said, “it has become utterly useless.” Three people will list the same diagnosis three different ways. Or an orthopedist will list the same generic symptom for every patient (“pain in leg”), which is sufficient for billing purposes but not useful to colleagues who need to know the specific diagnosis (e.g., “osteoarthritis in the right knee”). Or someone will add “anemia” to the problem list but not have the expertise to record the relevant details; Sadoughi needs to know that it’s “anemia due to iron deficiency, last colonoscopy 2017.” The problem lists have become a hoarder’s stash.

“They’re long, they’re deficient, they’re redundant,” she said. “Now I come to look at a patient, I pull up the problem list, and it means nothing. I have to go read through their past notes, especially if I’m doing urgent care,” where she’s usually meeting someone for the first time. And piecing together what’s important about the patient’s history is at times actually harder than when she had to leaf through a sheaf of paper records. Doctors’ handwritten notes were brief and to the point. With computers, however, the shortcut is to paste in whole blocks of information—an entire two-page imaging report, say—rather than selecting the relevant details. The next doctor must hunt through several pages to find what really matters. Multiply that by twenty-some patients a day, and you can see Sadoughi’s problem.

The software “has created this massive monster of incomprehensibility,” she said, her voice rising. Before she even sets eyes upon a patient, she is already squeezed for time. And at each step along the way the complexity mounts.

“Ordering a mammogram used to be one click,” she said. “Now I spend three extra clicks to put in a diagnosis. When I do a Pap smear, I have eleven clicks. It’s ‘Oh, who did it?’ Why not, by default, think that I did it?” She was almost shouting now. “I’m the one putting the order in. Why is it asking me what date, if the patient is in the office today? When do you think this actually happened? It is incredible!” The Revenge of the Ancillaries, I thought.

She continued rattling off examples like these. “Most days, I will have done only around thirty to sixty per cent of my notes by the end of the day,” she said. The rest came after hours. Spending the extra time didn’t anger her. The pointlessness of it did.

Difficulties with computers in the workplace are not unique to medicine. Matt Spencer is a British anthropologist who studies scientists instead of civilizations. After spending eighteen months embedded with a group of researchers studying fluid dynamics at Imperial College London, he made a set of observations about the painful evolution of humans’ relationship with software in a 2015 paper entitled “Brittleness and Bureaucracy.”

Years before, a graduate student had written a program, called Fluidity, that allowed the research group to run computer simulations of small-scale fluid dynamics—specifically, ones related to the challenge of safely transporting radioactive materials for nuclear reactors. The program was elegant and powerful, and other researchers were soon applying it to a wide range of other problems. They regularly added new features to it, and, over time, the program expanded to more than a million lines of code, in multiple computer languages. Every small change produced unforeseen bugs. As the software grew more complex, the code became more brittle—more apt to malfunction or to crash.

The I.B.M. software engineer Frederick Brooks, in his classic 1975 book, “The Mythical Man-Month,” called this final state the Tar Pit. There is, he said, a predictable progression from a cool program (built, say, by a few nerds for a few of their nerd friends) to a bigger, less cool program product (to deliver the same function to more people, with different computer systems and different levels of ability) to an even bigger, very uncool program system (for even more people, with many different needs in many kinds of work).

Spencer plotted the human reaction that accompanied this progression. People initially embraced new programs and new capabilities with joy, then came to depend on them, then found themselves subject to a system that controlled their lives. At that point, they could either submit or rebel. The scientists in London rebelled. “They were sick of results that they had gotten one week no longer being reproducible a week later,” Spencer wrote. They insisted that the group spend a year rewriting the code from scratch. And yet, after the rewrite, the bureaucratic shackles remained.

As a program adapts and serves more people and more functions, it naturally requires tighter regulation. Software systems govern how we interact as groups, and that makes them unavoidably bureaucratic in nature. There will always be those who want to maintain the system and those who want to push the system’s boundaries. Conservatives and liberals emerge.

Scientists now talked of “old Fluidity,” the smaller program, with fewer collaborators, that had left scientists free to develop their own idiosyncratic styles of research, and “new Fluidity,” which had many more users and was, accordingly, more rule-bound. Changes required committees, negotiations, unsatisfactory split-the-difference solutions. Many scientists complained to Spencer in the way that doctors do—they were spending so much time on the requirements of the software that they were losing time for actual research. “I just want to do science!” one scientist lamented.

Yet none could point to a better way. “While interviewees would make their resistance known to me,” Spencer wrote, “none of them went so far as to claim that Fluidity could be better run in a different manner.” New Fluidity had capabilities that no small, personalized system could ever provide and that the scientists couldn’t replace.

The Tar Pit has trapped a great many of us: clinicians, scientists, police, salespeople—all of us hunched over our screens, spending more time dealing with constraints on how we do our jobs and less time simply doing them. And the only choice we seem to have is to adapt to this reality or become crushed by it.

Many have been crushed. The Berkeley psychologist Christina Maslach has spent years studying the phenomenon of occupational burnout. She focussed on health care early on, drawn by the demanding nature of working with the sick. She defined burnout as a combination of three distinct feelings: emotional exhaustion, depersonalization (a cynical, instrumental attitude toward others), and a sense of personal ineffectiveness. The opposite, a feeling of deep engagement in one’s work, came from a sense of energy, personal involvement, and efficacy. She and her colleagues developed a twenty-two-question survey known as the Maslach Burnout Inventory, which, for nearly four decades, has been used to track the well-being of workers across a vast range of occupations, from prison guards to teachers.

In recent years, it has become apparent that doctors have developed extraordinarily high burnout rates. In 2014, fifty-four per cent of physicians reported at least one of the three symptoms of burnout, compared with forty-six per cent in 2011. Only a third agreed that their work schedule “leaves me enough time for my personal/family life,” compared with almost two-thirds of other workers. Female physicians had even higher burnout levels (along with lower satisfaction with their work-life balance). A Mayo Clinic analysis found that burnout increased the likelihood that physicians switched to part-time work. It was driving doctors out of practice.

Burnout seemed to vary by specialty. Surgical professions such as neurosurgery had especially poor ratings of work-life balance and yet lower than average levels of burnout. Emergency physicians, on the other hand, had a better than average work-life balance but the highest burnout scores. The inconsistencies began to make sense when a team at the Mayo Clinic discovered that one of the strongest predictors of burnout was how much time an individual spent tied up doing computer documentation. Surgeons spend relatively little of their day in front of a computer. Emergency physicians spend a lot of it that way. As digitization spreads, nurses and other health-care professionals are feeling similar effects from being screen-bound.

Sadoughi told me of her own struggles—including a daily battle with her Epic “In Basket,” which had become, she said, clogged to the point of dysfunction. There are messages from patients, messages containing lab and radiology results, messages from colleagues, messages from administrators, automated messages about not responding to previous messages. “All the letters that come from the subspecialists, I can’t read ninety per cent of them. So I glance at the patient’s name, and, if it’s someone that I was worried about, I’ll read that,” she said. The rest she deletes, unread. “If it’s just a routine follow-up with an endocrinologist, I hope to God that if there was something going on that they needed my attention on, they would send me an e-mail.” In short, she hopes they’ll try to reach her at yet another in-box.

As I observed more of my colleagues, I began to see the insidious ways that the software changed how people work together. They’d become more disconnected: less likely to see and help one another, and often less able to. Jessica Jacobs, a longtime office assistant in my practice—mid-forties, dedicated, with a smoker’s raspy voice—said that each new software system reduced her role and shifted more of her responsibilities onto the doctors. Previously, she sorted the patient records before clinic, drafted letters to patients, prepped routine prescriptions—all tasks that lightened the doctors’ load. None of this was possible anymore. The doctors had to do it all themselves. She called it “a ‘stay in your lane’ thing.” She couldn’t even help the doctors navigate and streamline their computer systems: office assistants have different screens and are not trained or authorized to use the ones doctors have.

“You can’t learn more from the system,” she said. “You can’t do more. You can’t take on extra responsibilities.” Even fixing minor matters is often not in her power. She’d recently noticed, for instance, that the system had the wrong mailing address for a referring doctor. But, she told me, “all I can do is go after the help desk thirteen times.”

Jacobs felt sad and sometimes bitter about this pattern of change: “It’s disempowering. It’s sort of like they want any cookie-cutter person to be able to walk in the door, plop down in a seat, and just do the job exactly as it is laid out.”

Sadoughi felt much the same: “The first year Epic came in, I was so close to saying, ‘That’s it. I’m done with primary care, I’m going to be an urgent-care doctor. I’m not going to open another In Basket.’ It took all this effort reëvaluating my purpose to stick with it.”

Gregg Meyer sympathizes, but he isn’t sorry. As the chief clinical officer at Partners HealthCare, Meyer supervised the software upgrade. An internist in his fifties, he has the commanding air, upright posture, and crewcut one might expect from a man who spent half his career as a military officer.

“I’m the veteran of four large-scale electronic-health-records implementations,” he told me in his office, overlooking downtown Boston. Those included two software overhauls in the military and one at Dartmouth-Hitchcock Medical Center, where he’d become the chief clinical officer. He still sees patients, and he experiences the same frustrations I was hearing about. Sometimes more: he admits he’s not as tech-savvy as his younger colleagues.

“But we think of this as a system for us and it’s not,” he said. “It is for the patients.” While some sixty thousand staff members use the system, almost ten times as many patients log into it to look up their lab results, remind themselves of the medications they are supposed to take, read the office notes that their doctor wrote in order to better understand what they’ve been told. Today, patients are the fastest-growing user group for electronic medical records.

Computerization also allows clinicians to help patients in ways that hadn’t been possible before. In one project, Partners is scanning records to identify people who have been on opioids for more than three months, in order to provide outreach and reduce the risk of overdose. Another effort has begun to identify patients who have been diagnosed with high-risk diseases like cancer but haven’t received prompt treatment. The ability to adjust protocols electronically has let Meyer’s team roll out changes far faster as new clinical evidence comes in. And the ability to pull up records from all hospitals that use the same software is driving real improvements in care.

Meyer gave me an example. “The care of the homeless population of Boston took a quantum leap,” he said. With just a few clicks, “we can see the fact that they had three TB rule-outs”—three negative test results for tuberculosis—“someplace else in town, which means, O.K., I don’t have to put him in an isolation room.”

In Meyer’s view, we’re only just beginning to experience what patient benefits are possible. A recent study bolsters his case. Researchers looked at Medicare patients admitted to hospitals for fifteen common conditions, and analyzed how their thirty-day death rates changed as their hospitals computerized. The results shifted over time. In the first year of the study, deaths actually increased 0.11 per cent for every new function added—an apparent cost of the digital learning curve. But after that deaths dropped 0.21 per cent a year for every function added. If computerization causes doctors some annoyance but improves patient convenience and saves lives, Meyer is arguing, isn’t it time we all got on board?

“I’m playing the long game,” he said. “I have full faith that all that stuff is just going to get better with time.”

And yet it’s perfectly possible to envisage a system that makes care ever better for those who receive it and ever more miserable for those who provide it. Hasn’t this been the story in many fields? The complaints of today’s health-care professionals may just be a white-collar, high-tech equivalent of the century-old blue-collar discontent with “Taylorization”—the industrial philosophy of fragmenting work into components, standardizing operations, and strictly separating those who design the workflow from those who do the work. As Frederick Winslow Taylor, the Progressive Era creator of “scientific management,” put it, “In the past, the man has been first; in the future, the system must be first.” Well, we are in that future, and the system is the computer.

Indeed, the computer, by virtue of its brittle nature, seems to require that it come first. Brittleness is the inability of a system to cope with surprises, and, as we apply computers to situations that are ever more interconnected and layered, our systems are confounded by ever more surprises. By contrast, the systems theorist David Woods notes, human beings are designed to handle surprises. We’re resilient; we evolved to handle the shifting variety of a world where events routinely fall outside the boundaries of expectation. As a result, it’s the people inside organizations, not the machines, who must improvise in the face of unanticipated events.

Last fall, the night before daylight-saving time ended, an all-user e-mail alert went out. The system did not have a way to record information when the hour from 1 a.m. to 1:59 a.m. repeated in the night. This was, for the system, a surprise event. The only solution was to shut down the lab systems during the repeated hour. Data from integrated biomedical devices (such as monitoring equipment for patients’ vital signs) would be unavailable and would have to be recorded by hand. Fetal monitors in the obstetrics unit would have to be manually switched off and on at the top of the repeated hour.
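The ambiguity behind that alert is easy to reproduce. When clocks fall back, every wall-clock time between 1 and 2 A.M. names two distinct moments, and a record stamped only with local time cannot say which one it means. A minimal sketch in Python (illustrative only, not the hospital’s software; the date assumes the U.S. fall-back of November 5, 2017):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

tz = ZoneInfo("America/New_York")

# On November 5, 2017, clocks fell back at 2 A.M., so the local hour
# 1:00-1:59 A.M. occurred twice. The same wall-clock reading, "1:30",
# names two different moments; Python's `fold` flag picks which one.
first = datetime(2017, 11, 5, 1, 30, tzinfo=tz)           # first pass (EDT)
second = datetime(2017, 11, 5, 1, 30, fold=1, tzinfo=tz)  # second pass (EST)

print(first.utcoffset())   # -4:00 — still daylight time
print(second.utcoffset())  # -5:00 — standard time, one real hour later

# A system that keys records by local wall-clock time alone cannot
# tell these apart; storing UTC removes the ambiguity.
print(first.astimezone(ZoneInfo("UTC")))
print(second.astimezone(ZoneInfo("UTC")))
```

A system that records in UTC never sees the repeated hour at all, which is why the only available fix for the hospital was to stop recording during it.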

Medicine is a complex adaptive system: it is made up of many interconnected, multilayered parts, and it is meant to evolve with time and changing conditions. Software is not. It is complex, but it does not adapt. That is the heart of the problem for its users, us humans.

Adaptation requires two things: mutation and selection. Mutation produces variety and deviation; selection kills off the least functional mutations. Our old, craft-based, pre-computer system of professional practice—in medicine and in other fields—was all mutation and no selection. There was plenty of room for individuals to do things differently from the norm; everyone could be an innovator. But there was no real mechanism for weeding out bad ideas or practices.

Computerization, by contrast, is all selection and no mutation. Leaders install a monolith, and the smallest changes require a committee decision, plus weeks of testing and debugging to make sure that fixing the daylight-saving-time problem, say, doesn’t wreck some other, distant part of the system.

For those in charge, this kind of system oversight is welcome. Gregg Meyer is understandably delighted to have the electronic levers to influence the tens of thousands of clinicians under his purview. He had spent much of his career seeing his hospitals blighted by unsafe practices that, in the paper-based world, he could do little about. A cardiologist might decide to classify and treat patients with congestive heart failure differently from the way his colleagues did, and with worse results. That used to happen all the time.

“Now there’s a change-control process,” Meyer said. “When everything touches everything, you have to have change-control processes.”

But those processes cannot handle more than a few change projects at a time. Artisanship has been throttled, and so has our professional capacity to identify and solve problems through ground-level experimentation. Why can’t our work systems be like our smartphones—flexible, easy, customizable? The answer is that the two systems have different purposes. Consumer technology is all about letting me be me. Technology for complex enterprises is about helping groups do what the members cannot easily do by themselves—work in coördination. Our individual activities have to mesh with everyone else’s. What we want and don’t have, however, is a system that accommodates both mutation and selection.

Human beings do not only rebel. We also create. We force at least a certain amount of mutation, even when systems resist. Consider that, in recent years, one of the fastest-growing occupations in health care has been medical-scribe work, a field that hardly existed before electronic medical records. Medical scribes are trained assistants who work alongside physicians to take computer-related tasks off their hands. This fix is, admittedly, a little ridiculous. We replaced paper with computers because paper was inefficient. Now computers have become inefficient, so we’re hiring more humans. And it sort of works.

Not long ago, I spent a day following Lynden Lee as he scribed at a Massachusetts General Hospital primary-care practice. Lee, a twenty-three-year-old graduate of Boston University, is an Asian-American raised in Illinois, and, like many scribes, he was doing the job, earning minimum wage, while he applied to medical school. He worked for Allan Goroll, a seventy-two-year-old internist of the old school—fuzzy eyebrows, steel-wool hair, waist-length white coat.

https://www.newyorker.com/magazine/2018/11/12/why-doctors-hate-their-computers