Trust

Write your own headline. Choose one from each category below, fiddle with the grammar so it makes sense and bingo!

NHS organisation      | Did something bad        | Resulting in
----------------------|--------------------------|---------------------------
Hospital              | Missed opportunities     | Patient harm or death
GP practice           | Had toxic culture        | Inquiry
Integrated care board | Ignored concerns         | Senior resignation
etc.                  | Mismanaged finances      | Fine
                      | Had operational failures | CQC inspection or censure
                      | etc.                     | Workforce disengagement
                      |                          | Postcode lottery
                      |                          | etc.

Familiar themes, so often repeated that we become immune to them, made explicit by the many enquiries into healthcare failures: Ely, Bristol Heart, Mid-Staffordshire, Morecambe Bay, Shipman. What have we learned? More importantly, what have we done to prevent similar things happening again?

One of the themes of healthcare enquiries is an assessment of accountability: who knew about the failure and when? Or if the failure was unrecognised, who should have known and why didn’t they know?

Frequently after an enquiry there are resignations, retirements or dismissals of senior staff who are held accountable: the average NHS Trust Chief Executive is in post for about three years. Something has gone wrong, so heads must roll. Accountability is clearly important – executives are defined by their ability to make significant organisational decisions and, by inference, to be responsible for them. It’s not unreasonable that executives and senior management are held accountable for failures that occur on their watch. The problem with this approach, though, is that an assessment of individual (or corporate) accountability is frequently insufficient to understand the causative factors in a failure. The system context in which decisions are made, strategies decided upon and priorities chosen is critical, and many systemic factors are outside the direct control of even senior executives.

We have become familiar with the concept of a no-blame culture in healthcare, even if it remains largely a unicorn concept: that people (staff, patients, the press, legal teams, politicians) might approach episodes of poor care or poor outcomes in an open, curious and non-judgemental manner, searching for answers to make things better rather than focussing on liability. The benefits of this approach are well documented, and it is deeply culturally embedded in some industries, especially aviation, as the opening paragraphs of most air accident investigation reports clearly attest. By avoiding scapegoating we enable all colleagues to contribute to an investigation in a spirit of psychological safety, not worried about their career, their livelihood, even their liberty. In doing so we gain a wealth of system intelligence about reasoning, about the why and how, and not just the what.

In my experience most people try their best and while some are more capable than others, few professional people in healthcare make deliberately self-interested or reckless decisions whatever their seniority. Executives should listen and be curious about the impact of their decisions; they may need to be brave in their choices and carry them to a conclusion without being defensive. They need to be sure of their values and transparent in how these frame their decisions. But it’s not reasonable to expect them always to be right, and by extension not reasonable to blame them (unless they have been wilfully blind) for decisions that turn out to be wrong, even if they are accountable for them.

For a no-blame culture in enquiry to flourish, one ingredient is essential: trust. Without trust, enquiry becomes adversarial inquisition and the opportunity for true learning is lost.

Decision makers must trust that their decisions, whether strategic, tactical or operational, will be reviewed objectively and without bias. Executive decisions are frequently made in the face of significant uncertainty, system volatility and outcome ambiguity so a decision that turns out to be hopelessly wrong (or have adverse unanticipated consequences) may still be made honestly and in good faith. In order to feel safe sharing information about how and why decisions were made, executives will need to trust the enquiry process, its chair and its scope and terms of reference.

Just as (if not more) importantly, the public need to trust that a process that does not result in blame and individual censure is not the same thing as a cover-up. They will need educating that in a complex system accountability is frequently delegated, diffuse and nebulous: a product of organic (and sometimes chaotic) organisational evolution rather than purposive design by an individual or group to whom responsibility can easily be apportioned. In the context of our current sociopolitical discourse, this is a hard sell.

How do we reconcile a no-blame culture (with all the system intelligence it brings) with the need for executive accountability? How high up an organisation should a no-blame culture extend? How can we maintain public faith in a process while enabling those experiencing it to speak freely and without anger, paranoia or fear? Only with trust.

I wonder if our failures to prevent recurrent harm in healthcare are related to our lack of trust, resulting in a willingness to seek accountability and then apportion blame. Blame allows us to embody the failings of a system in an individual. It gives the system a face and a focal point for our distress, anger or confusion. But the risk is that having fulfilled our atavistic desire for redress we lose interest in the hard work of system redesign, cultural change and investment in people, process and capital that might actually make a difference. Removing accountable staff is simple, easy, and cheap. System change is often complex, challenging and expensive, possibly prohibitively so.

So are we ready for a no-blame culture? Are our politicians, our profession, our legal system and the public really aware of the revolution in mindset needed? Do we have the trust in institutions, experts and process that such a culture requires? I am not hopeful.

Despite the exhortations from the great and the good, from multiple Secretaries of State for Health, the reports from august bodies, the hand-wringing and introspection, we continue to blame and we continue to fail. Is it inevitable that, just as with politicians, the careers of all senior healthcare executives end in failure? Much more importantly, is it inevitable that we will keep failing to learn the same lessons, over and over again?

Without trust, I fear it is.

My suspicion of AI in healthcare and everywhere else

AI – it’s everywhere: there every time a politician pronounces on how to transform productivity in all industries, healthcare included; each time you open a newspaper or watch TV; in conversations over coffee; in advertising and culture. AI, however ambiguously defined, is the new ‘white heat of technology’.

In her excellent book ‘Artificial Intelligence: A Guide for Thinking Humans’, Melanie Mitchell discusses the cycles of AI enthusiasm, from gushing AI boosterism to disappointment, rationalisation or steady and considered incorporation. She likens this cycle to the passing of the seasons – AI spring followed by an inevitable AI winter. The recent successes of AI, and in particular the rapid development of large language models like ChatGPT, have resulted in a sustained period of AI spring, with increasingly ambitious claims made for the technology, fuelled by the hubris of the ‘Bitter Lesson’ – that any human problem might be solvable not by thought, imagination, innovation or collaboration but simply by throwing enough computing power at it.

These seem exaggerated claims. Like many technologies, AI may be excellent for some things and not so good for others, and we have not yet learned to tell the difference. Most human problems come with a panoply of complexities that prevent wholly rational solutions. Personal (or corporate) values, prejudices, experience, intuition, emotion, playfulness and a whole host of other intangible human traits factor into their management. For example, AI is great at transcribing speech (voice recognition), but understanding spoken meaning is an altogether different problem, laden with glorious human ambiguity. When a UK English speaker says “not bad”, that can mean anything from amazing to deeply disappointing.

In our work as radiologists we live this issue of problem misappropriation every day. We understand there is a world of difference between the simple question ‘what’s on this scan?’ and the much more challenging ‘which of the multiple findings on this scan is relevant to my patient in the context of their clinical presentation, and what does this mean for their care?’. That’s why we call ourselves Clinical Radiologists, and why we have MDT meetings. Again, what seems like a simple problem may be, in fact, hugely complex. To suggest (as some have) that certain professions will be rendered obsolete by AI is to utterly misunderstand those professions, and the nature of the problems their human practitioners apply themselves to.

Why do we struggle to separate AI reality from hubristic overreach? Partly this is due to inevitable marketing and investor hype, but I also think the influence of literature and popular culture has an important role. Manufactured sentient agents are a common fictional device: from Frankenstein’s Monster via HAL 9000 to the Cyberdyne T-800 or Ash of modern science fiction. But we speak about actual AI using the same language as we do these fictional characters (and they are characters – that’s the point), imbuing it with anthropomorphic talents and motivations that are far divorced from today’s reality. We describe it as learning, as knowing, but we have no idea what this means. We are beguiled by its ability to mimic our language but don’t question the underlying thought. In short, we think of AI systems more like people than like tools limited in purpose and role. To steal a quote, we forget that these systems know everything about what they know, and nothing about anything else (there it is again: ‘know’?). Because we can solve complex problems, we think AI can, and in the same way.

Here’s an example. In studies of AI image interpretation, neural networks ‘learn’ from a ‘training’ dataset. Is this training and learning in the way we understand it? 

Think about how you train a radiologist to interpret a chest radiograph. After embedding the routine habit of demographic checking, you teach the principles of x-ray absorption in different tissues, then move on to helping them understand the silhouette sign and how the image findings fall inevitably, even beautifully, from the pathological processes present in the patient. It’s true that over time, with enough experience, a radiologist develops ‘gestalt’ or pattern recognition, meaning they don’t have to follow each of the steps to compose the report; they just ‘know’. But occasionally gestalt fails and they need to fall back on first principles.

What we do not do is give a trainee 100,000 CXRs, each tagged with the diagnosis and ask them to make up their own scheme for interpreting them. Yet this is exactly how we train an AI system: we give it a stack of labelled data and away it goes. There is no pedagogy, mentoring, understanding, explanation or derivation of first principles. There is merely the development of a statistical model in the hidden layers of the software’s neural network which may or may not produce the same output as the human. Is this learning?
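To make the contrast concrete, here is a deliberately minimal sketch (in Python, on synthetic data – everything in it is illustrative, not a real imaging system) of what this kind of ‘training’ actually is: iterative adjustment of a statistical model’s weights to reduce error on labelled examples, and nothing more.

```python
import math
import random

random.seed(0)

# Toy 'dataset': 200 labelled examples with 5 features each.
# The label depends only on the first feature -- but nobody tells the model that.
X = [[random.gauss(0.0, 1.0) for _ in range(5)] for _ in range(200)]
y = [1.0 if row[0] > 0 else 0.0 for row in X]

w = [0.0] * 5  # the 'model': five weights, nothing else


def predict(row):
    """Logistic model: squash the weighted sum of features into a probability."""
    z = sum(wi * xi for wi, xi in zip(w, row))
    return 1.0 / (1.0 + math.exp(-z))


# 'Training': repeatedly nudge the weights to shrink the prediction error.
# No pedagogy, no explanation, no first principles -- just error correction.
for _ in range(50):
    for row, label in zip(X, y):
        error = predict(row) - label
        w = [wi - 0.1 * error * xi for wi, xi in zip(w, row)]

accuracy = sum((predict(row) > 0.5) == (label == 1.0)
               for row, label in zip(X, y)) / len(y)
print(f"training accuracy: {accuracy:.0%}")
```

Nowhere in that loop is there any notion of what the features mean; the weights simply drift towards whatever statistical regularity in the data reduces the error, relevant or not.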

In her book, Mitchell provides some examples of how an AI’s learning is different from (and, I would say, inferior to) human understanding. She describes ‘adversarial attacks’, where the output from a system designed to interpret an image can be rendered wholly inaccurate by altering a single pixel within it – a change invisible to a human observer. More illustratively, she describes a system designed to identify whether an image contained a bird, trained on a vast number of images containing, and not containing, birds. But what the system actually ‘learned’ was not to identify a feathered animal but to identify a blurred background. Because, it turns out, most photos of birds are taken with a long lens, a shallow depth of field and a strong bokeh. So the system associated the bokeh with the tag ‘bird’. Why wouldn’t it, without the helping hand of a parent, a teacher or a guide to point out its mistake?

Is a machine developed in this way actually learning in the way we use the term? I’d argue it isn’t, and to suggest so implies much more than the system calibration actually going on. Would you expect the same from a ‘self-calibrating neural network’ as from a ‘learning machine’? Language matters: using less anthropomorphic terms allows us to think of AI systems as tools, not as entities.

We are used to deciding the best tool for a given purpose. Considering AI more instrumentally, as a tool, allows us the space to articulate more clearly what problem we want to solve, where an AI system would usefully be deployed and what other options might be available. For example, improving the immediate interpretation of CXRs by patient-facing (non-radiology) clinicians might be best served by an AI support tool, an education programme, a brief induction refresher, increases in reporting capacity, or all four. Which of those should a department invest in? Framing the question in this way at least encourages us to consider all alternatives, human and machine, and to weigh up the governance and economic risks of each more objectively. How often does that assessment happen? I’d venture, rarely. Rather, the technocratic allure of the new toy wins out and alternatives are either ignored or at best incompletely explored.

So to me, AI is a tool, like any other. My suspicion of it derives from my observation that what is promised for AI goes way beyond what is likely to be deliverable, that our language about it inappropriately imbues it with human traits, and that it crowds out human solutions which are rarely given equal consideration.

Melanie Mitchell concludes her book with a simple example, a question so basic that it seems laughable. What does ‘it’ refer to in the following sentence:

The table won’t fit through the door: it is too big.

AI struggles with questions like this. We can answer it because we know what a table is, and what a door is: concepts derived from our lived experience and our labelling of that experience. We know that doors don’t go through tables, but that tables may sometimes be carried through doors. This knowledge is not predicated on an assessment of a thousand billion sentences containing the words door and table, and the likelihood of the words appearing in a certain order. It’s based on what we term ‘common sense’.

No matter how reductionist your view of the mind as a product of the human brain, to reduce intelligence to a mere function of the number of achievable teraflops per second ignores that past experience of the world, nurture, relationships, personality and many other traits legitimately shape our common sense, thinking, decision making and problem solving. AI systems are remarkable achievements, but there is a way to go before I’ll lift my scepticism of their role as anything other than tools to be deployed judiciously and alongside other, human, solutions.

Demand

Is the NHS under-resourced to deliver what is asked of it? Estimates from august think tanks and national audits confirm that it is, and describe the scale of the under-resourcing and the deficits in staffing and infrastructure it has created. The Darzi report identified an £11.6bn backlog in capital expenditure in the NHS in England. We have fewer beds (2.4 vs 4.3 per thousand), doctors (3.2 vs 3.7 per thousand) and scanners (19 vs 41 per million) than our OECD comparators. To keep pace with demographic changes, new technologies and drugs, and the increased use of some surgical procedures, it’s estimated healthcare provision should increase 4% year on year. All OECD countries struggle with increasing healthcare spend.

Radiology services are at the sharp end of this demand growth. Imaging demand is increasing year on year at about 5% in the UK. For complex cross-sectional imaging, demand growth was 11% in 2023 alone. Unplanned and out-of-hours imaging demand has increased 40% in five years. It’s rare for a clinical initiative or guideline to suggest we need less imaging, or less urgent imaging. Getting It Right First Time usually requires early imaging to make certain an uncertain clinical picture. The development of new therapies often mandates more, and more frequent, imaging. The Richards Report indicated that a 20% increase in imaging delivery was needed.

Can we control this healthcare growth? The idea of demand control in healthcare is fraught with complex ethical and moral dilemmas about access to treatment, the nature of the doctor-patient relationship and the needs of the individual versus those of the collective. The language we use (‘rationing’, ‘postcode lottery’, ‘playing God’) and powerful stories about individuals or groups denied care on the basis of decision making by ‘faceless bureaucrats’ means that rational debate about demand management in healthcare is challenging. Demand management calls into question what we mean by comprehensive healthcare and how society should respond to the needs of vulnerable people.

Even discussion about prevention and public health, effective and on-the-face-of-it uncontroversial ways to improve population health and thereby control demand long term, is freighted with unhelpful language (‘nanny statism’) and arguments about personal liberty and choice (the latter supported by powerful corporate lobbyists whose interests are risked by state interventions for smoking, alcohol and obesity). Initiatives targeting the most needy and aimed at equitable (rather than equal) resource distribution are sometimes denigrated as ‘woke’.

In the financial year 2022-23, the UK government spent £239bn on healthcare (mainly on the NHS): 18% of total public-sector spend and 11% of GDP. At 4% growth, in 10 years’ time this figure will be (a back-of-the-envelope calculation) almost 50% greater. Healthcare spending, often protected, has already increased at the expense of other government departmental spending (especially defence – see figure), with little further room for cannibalisation of other budgets. The often-advocated narrative of economic growth to deliver spending resource seems a forlorn and remote aspiration given anaemic growth figures for the UK and most other advanced economies over the last decade.

Health (green) and defence (magenta) spending as share of GDP 1955-2021

Figure source: Institute for Fiscal Studies Taxlab. What does the government spend money on?
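As a sanity check on that back-of-the-envelope projection (pure arithmetic – the £239bn baseline and 4% growth rate are the figures already quoted above, not a forecast):

```python
# Compound growth: £239bn growing at 4% per year for 10 years.
base = 239   # £bn, UK government healthcare spend, 2022-23
rate = 0.04  # assumed annual growth in demand/spend
years = 10

future = base * (1 + rate) ** years
print(f"£{future:.0f}bn, a {future / base - 1:.0%} increase")  # → £354bn, a 48% increase
```

So ‘almost 50% greater’ is about right (1.04 compounded over 10 years is roughly 1.48), and the gap to other budgets only widens from there.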


There are undoubtedly productivity gains to be made, and in radiology many potential solutions are well rehearsed: comprehensive and careful request vetting, electronic systems to support it (and to feed back to referring colleagues), decision support tools (such as iRefer) at the point of request, and visibility of requests and booked scan appointments within the electronic patient record are all technical innovations that can improve requesting culture, reduce duplication and deliver marginal reductions in demand. Skill mix and better use of radiographer reporting can help with workload and are already well established for some teams and imaging types (especially ultrasound and plain film imaging). Perhaps artificial intelligence will finally deliver on its promise? Will this be enough? I doubt it.

So how can we deliver? With our current model of healthcare, ultimately, we will not be able to. The spending graphs for healthcare as a proportion of GDP extrapolate to this inevitability. Without rethinking the model, services will fail – little by little, and around the edges at first – in a myriad of unplanned ways. The deterioration will manifest as longer waiting times and failure to meet constitutional and other standards, increases in falls, failures in infection prevention and control, loss of access for marginalised groups, estate degradation, workforce crises, increased complaints and litigation, and in other, sometimes immeasurable, important ways. Does this sound familiar? It’s happening already. The irony is that as we spend more on increasingly expensive, process-focussed, fractured and technology-driven healthcare, we deliver less health and the experience of service users deteriorates. Healthcare delivery is more than just logistics.

We cannot address delivery without controlling demand in a system-wide manner. This especially applies to complex new therapies, imaging and drugs, the primary drivers of increased spending. Practical demand management is hard because we assume more healthcare equals better health, are beguiled by technology, no longer understand risk, and are wedded to pathway solutions that reduce the intangibles of the human interaction between a patient and a healthcare professional to nodes on a decision tree from which every branch results in more to do. It is also hard because our political structures rely on promises made within a brief electoral cycle, subordinating the ability of our institutions to undertake long-term planning. Complex decisions like those needed to equitably and ethically address demand are ignored because there will be politically unpalatable losses in the medium term, while the wins may take many years to manifest.

What’s the solution? A massive funding pivot to primary care and its ability to resolve many simple issues quickly, cheaply and effectively? Removing healthcare delivery from governmental control altogether, sacking the Secretary of State and assigning a fixed proportion of GDP for 25 years to allow long term planning? Addressing the social determinants of health: education, housing, lifestyle choice, opportunity, inequality? Robust implementation of cost-effectiveness principles in healthcare design? Public education about risk? Promotion of a stoic understanding of what it means to live a good life, knowing that death is inevitable? 

If all that seems too far outside your zones of control or influence, then perhaps in your day-to-day practice take a moment to consider the things you can change. Each time you make a decision, ask yourself: is this test, treatment, referral or innovation really needed? Who am I treating, the patient or myself? Is it easier to do the wrong thing than the right thing and, if so, why? Am I too busy to think about this? Am I too proud or too anxious to ask for help? We all have a role to play in identifying pointless, wasted or supplier-induced demand. Making better small decisions every day is achievable, and accumulations of hundreds of thousands of tiny marginal gains can have a big effect. This will not be sufficient on its own, but it is necessary, vitally so.

Demand. It’s the elephant in the room of healthcare funding. Ignore it and sooner or later we’ll all be trampled. It’s our urgent responsibility as healthcare professionals to act to control demand, even if our government seems unable to.

Moral hazard in a failing service

I go to see a woman on the ward to tell her that, again, her procedure is cancelled. I see, written in the resigned expression on her face, the effort and emotional energy it has taken to get herself here: arrangements she made about the care of her household, relatives providing transport from her home over 70 miles away and now unexpectedly called to pick her up. A day waiting, the anxiety building as a 9am appointment became 10, then lunchtime, then afternoon. The tedious arrangements to be necessarily repeated: COVID swabs, blood tests, anticoagulation bridging. All wasted.

She smiles at me as I apologise. She is kind, rather than angry, understanding rather than belligerent. And yet she has every right to be furious. This is, after all, the second time this has happened. And she knows as well as I do that my attempts at assurance that we will prioritise her bed for the next appointment she is offered are as empty and meaningless as they were last time she heard them.

Such stories are the everyday reality for patients and clinicians within the NHS, repeated thousands of times a day across the country, each one a small quantum of misery. At least my patient got an appointment. Some don’t. Ask anyone with a condition that is not life threatening or somehow subject to media scrutiny or an arbitrary governmental target about their access to planned hospital care and you will likely get a snort of derision or a sob of hopelessness. Benign gynaecological conditions (for example) can be debilitating but frequently slip to the bottom of the priority list, suffered in private silence, without advocates able to leverage the rhetorical and emotional weight of a cancer diagnosis.

This is not all COVID related. Yes, COVID has made things worse but really all the pandemic has done is cruelly reveal the structural inadequacies that we have been working around in the NHS for years and years. ‘Winter pressures’ have reliably and predictably closed planned care services even if it took until winter 2017 for the NHS to officially recognise this and cancel all elective surgery for weeks. Estate is often old and not fit for purpose. Departmental and ward geography does not allow for the patient separation and flow demanded by modern healthcare. Staffing rotas are stretched to the limit with no redundancy for absence. Old infrastructure and equipment requires inefficient workarounds. Increasing effort goes into Byzantine plans for ‘service continuity’ to deal with operational risks, while the fundamentals remain unaddressed.

Efficiency requires investment. You cannot move from a production line using humans to one using robots without investing in the robots to do the work and the skilled people to run them. You cannot move from an inpatient to an outpatient model of care for a condition without investing in the infrastructure and people to oversee that pathway. You cannot manage planned and unplanned care via a single resource without adversely affecting the efficiency of both. You cannot expect a hugely expensive operating theatre or interventional radiology suite to function productively if the personnel tasked with running it spend a significant proportion of their day juggling cases and staff in an (often vain) attempt to get at least a few patients ready and through the department. Modern healthcare requires many systems to function optimally (or at least adequately) before anything can be done. Expensive resources frequently lie idle when a failure in one process results in the entire patient pathway collapsing.

The moral hazard encountered by people working in this creaking system is huge. How can we feel proud of the service we offer when failure is a daily occurrence? When we, the patient-facing front-of-house, are routinely embarrassed by – or apologetic for – the system we represent? We can retreat into the daily small victories: a patient treated well, with compassion, leaving satisfied; an emergency expertly, efficiently and speedily dealt with; teamwork. But these small victories seem less and less consoling as the failures mount. Eventually staff (people, after all) lose belief, drive and motivation. Disillusionment breeds diffidence, apathy and disengagement. The service, reliant on motivated and culturally engaged teams, becomes less safe, less caring, less personal and even more inefficient as staff are no longer inclined to work occasionally over and above their job-planned activity. A bureaucracy of resource management develops and teams become splintered. Process replaces culture, and a credentialed skill-mix replaces trusted professional relationships.

The moral hazard is compounded by the seemingly wilful blindness of our political masters, the holders of the purse strings, to the size of the problem. Absent any real prospect of improvement, we learn to accept the status quo: the cancellations, the delays, the waiting lists. And our patients accept this too: how else does one explain their weary stoicism? Meanwhile our leaders cajole us to be more efficient, to embrace new ways of working, to do a lot more with a bit more money. It remains politically expedient to disguise a few per cent increase in healthcare revenue spending as ‘record investment’, but I argue that most people working at all levels in the NHS recognise the need for transformative, generational investment on a level not seen since the inception of the service. Such investment requires money, and money means taxation.

More than that, there needs to be the political bravery to open a considered debate about what we mean by healthcare, where our money is most efficiently targeted, and what we as a society can (or are willing to) afford amongst other priorities for governmental spending. Shiny new hospitals providing state-of-the-art treatment may make good PR but are meaningless without functional, well-funded primary care. Investment in complex clinical technologies will not improve our nation’s health if its social determinants (poverty, smoking, diet, housing, education, joblessness, social exclusion) remain unaddressed. Such a discourse seems anathema to our current politics, with its emphasis on the individual, on technocratic solutions and on the empty promise of being able to have everything we want at minimal personal, environmental or societal cost.

Until our leaders start this debate, and until we, as members of society, understand the arguments and elect politicians to enact its conclusions, ‘our NHS’ will continue to provide sometimes substandard and inefficient care in a service defined by its own introspection rather than by the needs of the community it should serve. Our healthcare metrics will continue to lag behind those of comparator nations. And I will continue to find myself, late in the afternoon, apologising to women and men for the inconvenience and anxiety as I speak to them about cancelling their procedure, hating myself for it but helpless to offer any solution or solace.