Trust

Write your own headline. Choose one from each category below, fiddle with the grammar so it makes sense and bingo!

NHS organisation | Did something bad | Resulting in
Hospital | Missed opportunities | Patient harm or death
GP Practice | Had toxic culture | Inquiry
Integrated care board | Ignored concerns | Senior resignation
etc. | Mismanaged finances | Fine
 | Had operational failures | CQC inspection or censure
 | etc. | Workforce disengagement
 |  | Postcode lottery
 |  | etc.

Familiar themes, so often repeated that we become immune to them, made explicit by the many enquiries into healthcare failures: Ely, Bristol Heart, Mid-Staffordshire, Morecambe Bay, Shipman. What have we learned? More importantly, what have we done to prevent similar things happening again?

One of the themes of healthcare enquiries is an assessment of accountability: who knew about the failure and when? Or if the failure was unrecognised, who should have known and why didn’t they know?

Frequently after an enquiry there are resignations, retirements or dismissals of senior staff who are held accountable: the average NHS Trust Chief Executive is in post for about 3 years. Something has gone wrong so heads must roll. Accountability is clearly important – executives are defined by their ability to make significant organisational decisions and by inference therefore be responsible for them. It’s not unreasonable that executives and senior management are held accountable for failures that occur on their watch. The problem with this approach, though, is that an assessment of individual (or corporate) accountability is frequently insufficient to understand the causative factors in a failure. The system context in which decisions are made, strategies decided upon and priorities chosen is critical, and many systemic factors are outside the direct control of even senior executives.

We have become familiar with the concept of a no-blame culture in healthcare even if it remains largely a unicorn concept: that people (staff, patients, the press, legal teams, politicians) might approach episodes of poor care or poor outcome with an open, curious and non-judgemental manner, searching for answers to make things better rather than focussing on liability. The benefits of this approach are well documented, and it is deeply culturally embedded in some industries, especially aviation, as the opening paragraphs of most air accident investigation reports clearly attest. By avoiding scapegoating we enable all colleagues to contribute to an investigation in a spirit of psychological safety, not worried about their career, their livelihood, even their liberty. In doing so we gain a wealth of system intelligence about reasoning, about the why and how, and not just the what. 

In my experience most people try their best and while some are more capable than others, few professional people in healthcare make deliberately self-interested or reckless decisions whatever their seniority. Executives should listen and be curious about the impact of their decisions; they may need to be brave in their choices and carry them to a conclusion without being defensive. They need to be sure of their values and transparent in how these frame their decisions. But it’s not reasonable to expect them always to be right, and by extension not reasonable to blame them (unless they have been wilfully blind) for decisions that turn out to be wrong, even if they are accountable for them.

For a no-blame culture in enquiry to flourish, it requires an essential ingredient: trust. Without trust, enquiry becomes adversarial inquisition and the opportunity for true learning is lost.

Decision makers must trust that their decisions, whether strategic, tactical or operational, will be reviewed objectively and without bias. Executive decisions are frequently made in the face of significant uncertainty, system volatility and outcome ambiguity so a decision that turns out to be hopelessly wrong (or have adverse unanticipated consequences) may still be made honestly and in good faith. In order to feel safe sharing information about how and why decisions were made, executives will need to trust the enquiry process, its chair and its scope and terms of reference.

Just as (if not more) importantly, the public need to trust that a process that does not result in blame and individual censure is not the same thing as a cover-up. They will need educating that in a complex system accountability is frequently  delegated, diffuse and nebulous: a product of organic (and sometimes chaotic) organisational evolution rather than purposive design by an individual or group to whom responsibility can easily be apportioned. In the context of our current sociopolitical discourse, this is a hard sell.

How do we reconcile a no-blame culture (with all the system intelligence it brings) with the need for executive accountability? How high up an organisation should a no-blame culture extend? How can we maintain public faith in a process while enabling those experiencing it to speak freely and without anger, paranoia or fear? Only with trust.

I wonder if our failures to prevent recurrent harm in healthcare are related to our lack of trust, resulting in a willingness to seek accountability and then apportion blame. Blame allows us to embody the failings of a system in an individual. It gives the system a face and a focal point for our distress, anger or confusion. But the risk is that having fulfilled our atavistic desire for redress we lose interest in the hard work of system redesign, cultural change and investment in people, process and capital that might actually make a difference. Removing accountable staff is simple, easy, and cheap. System change is often complex, challenging and expensive, possibly prohibitively so.

So are we ready for a no-blame culture? Are our politicians, our profession, our legal system and the public really aware of the revolution in mindset needed? Do we have the trust in institutions, experts and process that such a culture requires? I am not hopeful.

Despite the exhortations from the great and the good, from multiple Secretaries of State for Health, the reports from august bodies, the hand wringing and introspection, we continue to blame and we continue to fail. Is it inevitable that, just as with politicians, the careers of all senior healthcare executives end in failure? Much more importantly, is it inevitable that we will keep failing to learn the same lessons, over and over again?

Without trust, I fear it is.

Why is online learning so soul-sapping?

I’ve done a couple of courses recently. Both were potentially valuable and relevant, appropriate for my job, novel and focussed on topics I didn’t know much about, led by knowledgeable tutors who clearly had wide experience. Despite these positives, I can remember barely anything about them. Why? Maybe because, like so many things since the beginning of the Covid pandemic, they were remote and online.

We’ve all been there, sitting in front of a computer screen, watching a tessellation of equally disengaged faces as the facilitator valiantly struggles through a slide deck, pausing occasionally for the mandatory 14 seconds for a response while the audience squirms in awkward and embarrassed silence. Any questions? Only “when will this purgatory end?”

This is, of course, a manifestation of the format. Exploration of ideas is hard enough in a room full of strangers and online the social norms are much less rehearsed – while the hand-raise function allows people to speak, the resulting contributions are often sequential non sequiturs rather than a flowing conversation. The increased need to chair online means all comment is routed via the facilitator rather than the group discussing together. And it’s too easy to appear to be focussed when in fact your attention is wandering, hijacked by so many other feeds competing for your digital attention: email, WhatsApp, the cat.

But I think there is more to it than just the mechanics of Teams or Zoom and the presence of distractions that make online learning so unsatisfying. We attach less meaning to an online course, and we therefore value it less. It’s a more disposable commodity.

In his (excellent) book, Alchemy, Rory Sutherland (Vice Chairman of the marketing and advertising company Ogilvy) discusses creating meaning in a product, and from it value (hence Alchemy: creating something from nothing). Meaning can be created by the imaginative use of packaging or advertising, by association, by pricing, by brand identity, reputation and reliability, by overcoming barriers to market entry (whether artificially created or real) and in many other ways. One of the messages of the book is that this meaning creation does not have to be logical and frequently isn’t. Some of the reasons Red Bull is successful are precisely because it tastes weird, is expensive and comes in a small can. The demand for Veblen goods increases with increased price, going against all accepted economic theory.

He also describes the importance of internal meaning creation. These are the stories we tell ourselves about a product when we buy or use it, and again they need not be rational or true. Internal meaning is the basis of the placebo effect and explains a host of human preferences and contemporary consumerism. It’s why people who criticise the luxury goods market completely miss the point: these goods make their owners feel better for no other reason than because they are expensive.

Think about attending an in-person course. It might be held in a nice venue. You might get lunch, or at least a coffee and a biscuit. You may need to travel or even stay overnight. You will certainly need to set aside more time to attend than the duration of the course itself. You are likely to dress smartly to make an impression. You may get to meet some interesting people, perhaps go for a drink or a meal after, discuss the content. All these things create both an external and internal sense of meaning and value in the course which will foster greater engagement in its participants (even if the content and facilitator are exactly the same as in its online version). By and large, when you attend an in-person event, you are signalling to yourself that it is important and deserves your attention*.

Now think of the last online course you did. Did you bookend it with other work, squeeze it in between other commitments? Did you sit in a shirt and pyjama bottoms (admit it!)? Did you spend time after thinking about it or discussing the ideas with family, colleagues or friends? What does that unconscious signalling tell you about how you valued the course? Might this explain in part why online courses are so awful: because we don’t signal their value to ourselves as we might and it’s therefore okay to be distracted and switch off?

Content and delivery are essential. If the facilitator is just reading out their slides, maybe with some additional content interspersed sparsely amongst the tedium, then there is nothing you can do to resurrect the experience. But assuming the online course itself is exemplary, here are some top tips to create a more meaningful experience.

  1. Make time. Leave at least an hour before or after: a replacement for your travel time. Do something different in this time. Read a book. Listen to music. Anything but work. This is legitimate use of your time – you are mentally preparing for your course. If you have study leave, use it to take the whole day, or at least the morning or afternoon.
  2. Dress up. If you’d wear a suit and tie for an in-person course, do that. If you’d do your hair or wear make-up, do that. The idea is to simulate the effort you’d make if you were physically in the room with the other participants.
  3. Ensure the room you are in is quiet and free of distraction. Mute your notifications and turn off your phone. Clear your desk of anything not immediately relevant to the course. You wouldn’t use your laptop in a face-to-face meeting and you’d feel rude checking your phone. Apply those same standards to an online course.
  4. Take notes and review them in the time you’ve set aside after the course, to embed your learning. If you can, discuss them, ideally with someone else who took part. Schedule this in with a post-course call.
  5. Reward yourself with a treat afterwards. The more expensive the better. This is the equivalent of the course cost.

These things need not make sense. Why not sit in comfortable joggers? What’s a treat got to do with attending an online course? But that’s the point – meaning creation is subconscious and often illogical, as with Red Bull.

If you find all the above far too much effort, you should consider whether the course is worth your time at all. Do you really want to attend? If you can’t create meaning or value, then your time may well be better spent doing something else.

As a final thought, perhaps meaning is one of the reasons mandatory training is so hated. No-one would deny that information governance, fire safety, infection prevention and control, safeguarding etc. are rationally important, but even ignoring the repetition (“Janet from admin has been asked by her neighbour to look up her colonoscopy results” – again!), when the compulsory completion of these courses feels like a meaningless exercise in corporate and regulatory risk management rather than something genuinely useful, we are bound to resent the completion.

There are lessons in Alchemy for designers and administrators of mandatory training. They can be offered online for completion late at night, in our underwear, squeezed in between appraisal paperwork and sleep; or they could be offered en-bloc, onsite, in work time, with colleagues, with coffee, with conversation. Which one of those best signals the importance of the course for the organisation? Which will be most meaningful for participants? Which one will have most impact?


*this is why scanner manufacturers prefer to show you their new scanner in Europe even when there is one already up and running down the road. With apologies to colleagues in the West Midlands or South Yorkshire, Barcelona or Seville will always feel more consequential than Birmingham or Sheffield (Spanish readers may disagree).

My suspicion of AI in healthcare and everywhere else.

AI – it’s everywhere: it’s there every time a politician pronounces on how to transform productivity in all industries, healthcare included, each time you open a newspaper or watch TV, in conversations over coffee, in advertising and culture. AI, however ambiguously defined, is the new ‘white heat of technology’.

In her excellent book ‘Artificial Intelligence: A Guide for Thinking Humans’ Melanie Mitchell discusses the  cycles of AI enthusiasm, from gushing AI boosterism to disappointment, rationalisation or steady and considered incorporation. She likens this cycle to the passing of the seasons – AI spring followed by an inevitable AI winter. The recent successes of AI, and in particular the rapid development of large language models like ChatGPT have resulted in a sustained period of AI spring, with increasingly ambitious claims made for the technology, fuelled by the hubris of the ‘Bitter Lesson’ – that any human problem might be solvable not by thought, imagination, innovation or collaboration but simply by throwing enough computing power at it.

These seem exaggerated claims. Like many technologies, AI may be excellent for some things, not so good for others, and we have not learned to tell the difference. Most human problems come with a panoply of complexities that prevent wholly rational solutions. Personal (or corporate) values, prejudices, experience, intuition, emotion, playfulness and a whole host of other intangible human traits factor into their management. For example, AI is great at transcribing speech (voice recognition) but understanding spoken meaning is an altogether different problem laden with glorious human ambiguity. When a UK English speaker says “not bad” that can mean anything from amazing to deeply disappointing.

In our work as radiologists we live this issue of problem misappropriation every day. We understand there is a world of difference between the simple question ‘what’s on this scan’ and the much more challenging ‘which of the multiple findings on this scan is relevant to my patient in the context of their clinical presentation and what does this mean for their care’. That’s why we call ourselves Clinical Radiologists, why we have MDT meetings. Again, what seems like a simple problem may be, in fact, hugely complex. To suggest (as some have) that certain professions will be rendered obsolete by AI is to utterly misunderstand those professions, and the nature of the problems their human practitioners apply themselves to.

Why do we struggle to separate AI reality from hubristic overreach? Partly this is due to inevitable marketing and investor hype, but I also think the influence of literature and popular culture has an important role. Manufactured sentient agents are a common fictional device: from Frankenstein’s Monster via HAL 9000 to the Cyberdyne T800 or Ash of modern Science Fiction. But we speak about actual AI using the same language as we do these fictional characters (and they are characters – that’s the point), imbuing it with anthropomorphic talents and motivations that are far divorced from today’s reality. We describe it as learning, as knowing, but we have no idea what this means. We are beguiled by its ability to mimic our language but don’t question the underlying thought. In short, we think of AI systems more like people than like a tool limited in purpose and role. To steal a quote, we forget that these systems know everything about what they know, and nothing about anything else (there it is again: ‘know’?). Because we can solve complex problems, we think AI can, and in the same way.

Here’s an example. In studies of AI image interpretation, neural networks ‘learn’ from a ‘training’ dataset. Is this training and learning in the way we understand it? 

Think about how you train a radiologist to interpret a chest radiograph. After embedding the routine habit of demographic checking, you teach the principles of x-ray absorption in different tissues, then move on to helping them understand the silhouette sign and how the image findings fall inevitably, even beautifully, from the pathological processes present in the patient. It’s true that over time, with enough experience, a radiologist develops ‘gestalt’ or pattern recognition, meaning they don’t have to follow each of the steps to compose the report, they just ‘know’. But occasionally gestalt fails and they need to fall back to first principles.

What we do not do is give a trainee 100,000 CXRs, each tagged with the diagnosis and ask them to make up their own scheme for interpreting them. Yet this is exactly how we train an AI system: we give it a stack of labelled data and away it goes. There is no pedagogy, mentoring, understanding, explanation or derivation of first principles. There is merely the development of a statistical model in the hidden layers of the software’s neural network which may or may not produce the same output as the human. Is this learning?
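To make the contrast concrete, the whole of that ‘training’ can be sketched in a handful of lines. This is a deliberately minimal illustration (using scikit-learn and made-up data, not any particular clinical product or pipeline): labelled examples go in, a fitted statistical model comes out, and nothing resembling pedagogy happens in between.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical stand-ins for the stack of tagged CXRs: random "pixels" and labels,
# purely so the sketch runs end to end.
rng = np.random.default_rng(0)
images = rng.random((1000, 64 * 64))        # 1,000 flattened 64x64 "radiographs"
labels = rng.integers(0, 2, size=1000)      # each tagged "normal" (0) or "abnormal" (1)

# "We give it a stack of labelled data and away it goes":
model = MLPClassifier(hidden_layer_sizes=(256, 64), max_iter=50)
model.fit(images, labels)                   # statistical fitting in hidden layers; no first principles

new_image = rng.random((1, 64 * 64))
print(model.predict(new_image))             # an output that may or may not match the human report
```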

In her book, Mitchell provides some examples of how an AI’s learning is different (and I would say inferior) to human understanding. She describes ‘adversarial attacks’ where the output from a system designed to interpret an image can be rendered wholly inaccurate by altering a single pixel within it, a change invisible to a human observer. More illustratively, she describes a system designed to identify whether an image contained a bird, trained on a vast number of images containing, and not containing, birds. But what the system actually ‘learned’ was not to identify a feathered animal but to identify a blurred background. Because, it turns out, most photos of birds are taken with a long lens, a shallow depth of field and a strong bokeh. So the system associated the bokeh with the tag ‘bird’. Why wouldn’t it, without the helping hand of a parent, a teacher or a guide to point out its mistake?

Is a machine developed in this way actually learning in the way we use the term? I’d argue it isn’t and to suggest so implies much more than the system calibration actually going on. Would you expect the same from a self-calibrating neural network as from a learning machine? Language matters: using less anthropomorphic terms allows us to think of AI systems as tools, not as entities.

We are used to deciding the best tool for a given purpose. Considering AI more instrumentally, as a tool, allows us the space to articulate more clearly what problem we want to solve, where an AI system would usefully be deployed and what other options might be available. For example, improving the immediate interpretation of CXRs by patient-facing (non-radiology) clinicians might be best served by an AI support tool, an education programme or brief induction refresher, increases in reporting capacity or all four. Which of those things should a department invest in? Framing the question in this way at least encourages us to consider all alternatives, human and machine, and to weigh up the governance and economic risks of each more objectively. How often does that assessment happen? I’d venture, rarely. Rather the technocratic allure of the new toy wins out and alternatives are either ignored or at least incompletely explored.

So to me, AI is a tool, like any other. My suspicion of it derives from my observation that what is promised for AI goes way beyond what is likely to be deliverable, that our language about it inappropriately imbues it with human traits, and that it crowds out human solutions which are rarely given equal consideration.

Melanie Mitchell concludes her book with a simple example, a question so basic that it seems laughable. What does ‘it’ refer to in the following sentence:

The table won’t fit through the door: it is too big.

AI struggles with questions like this. We can answer this because we know what a table is, and what a door is: concepts derived from our lived experience and our labelling of that experience. We know that doors don’t go through tables, but that tables may sometimes be carried through doors. This knowledge is not predicated on assessment of a thousand billion sentences containing the words ‘door’ and ‘table’, and the likelihood of the words appearing in a certain order. It’s based in what we term ‘common sense’.

No matter how reductionist your view of the mind as a product of the human brain, to reduce intelligence to a mere function of the number of achievable teraflops ignores that past experience of the world, nurture, relationships, personality and many other traits legitimately shape our common sense, thinking, decision making and problem solving. AI systems are remarkable achievements, but there is a way to go before I’ll lift my scepticism of their role as anything other than a tool to be deployed judiciously and alongside other, human, solutions.

Demand

Is the NHS under-resourced to deliver what is asked of it? Estimates from august think tanks and national audits confirm that it is, and describe the scale of the under-resourcing and the deficits in staffing and infrastructure created. The Darzi report identified an £11.6bn backlog in capital expenditure in the NHS in England. We have fewer beds (2.4 vs 4.3 per thousand), doctors (3.2 vs 3.7 per thousand) and scanners (19 vs 41 per million) than our OECD comparators. To keep pace with demographic changes, new technologies and drugs and the increased use of some surgical procedures, it’s estimated healthcare provision should increase by 4% year on year. All OECD countries struggle with increasing healthcare spend.

Radiology services are at the sharp end of this demand growth. Imaging demand is increasing year-on-year at about 5% in the UK. For complex cross sectional imaging, demand growth was 11% in 2023 alone. Unplanned and out-of-hours imaging demand has increased 40% in 5 years. It’s rare for a clinical initiative or guideline to suggest we need less imaging, or less urgent imaging. Getting It Right First Time usually requires early imaging to make certain an uncertain clinical picture. The development of new therapies often mandates more, and more frequent, imaging. The Richards Report indicated that a 20% increase in imaging delivery was needed.

Can we control this healthcare growth? The idea of demand control in healthcare is fraught with complex ethical and moral dilemmas about access to treatment, the nature of the doctor-patient relationship and the needs of the individual versus those of the collective. The language we use (‘rationing’, ‘postcode lottery’, ‘playing God’) and powerful stories about individuals or groups denied care on the basis of decision making by ‘faceless bureaucrats’ means that rational debate about demand management in healthcare is challenging. Demand management calls into question what we mean by comprehensive healthcare and how society should respond to the needs of vulnerable people.

Even discussion about prevention and public health, effective and on-the-face-of-it uncontroversial ways to improve population health and thereby control demand long term, is freighted with unhelpful language (‘nanny statism’) and arguments about personal liberty and choice (the latter supported by powerful corporate lobbyists whose interests are risked by state interventions for smoking, alcohol and obesity). Initiatives targeting the most needy and aimed at equitable (rather than equal) resource distribution are sometimes denigrated as ‘woke’.

In the financial year 2022-23, the UK government spent £239bn on healthcare (mainly on the NHS), 18% of the total public-sector spend and 11% of GDP. At 4% growth, in 10 years’ time this figure will be (on a back-of-the-envelope calculation) almost 50% greater. Healthcare spending, often protected, has already increased at the expense of other government departmental spending (especially defence – see figure) with little further room for cannibalisation of other budgets. The often advocated narrative of economic growth to deliver spending resource seems a forlorn and remote aspiration given anaemic growth figures for the UK and most other advanced economies over the last decade.
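For the curious, the back-of-the-envelope compounding runs as follows (illustrative arithmetic only, holding everything else constant):

$$ \pounds239\,\text{bn} \times 1.04^{10} \approx \pounds239\,\text{bn} \times 1.48 \approx \pounds354\,\text{bn} $$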

Health (green) and defence (magenta) spending as share of GDP 1955-2021

Figure source: Institute for Fiscal Studies Taxlab. What does the government spend money on?


There are undoubtedly productivity gains to be made and in radiology many potential solutions are well rehearsed: comprehensive and careful request vetting, electronic systems to support it (and to feed back to referring colleagues), decision support tools (such as iRefer) at the point of request and visibility of requests and booked scan appointments within the electronic patient record are all technical innovations that can improve a requesting culture, reduce duplication and deliver marginal reductions in demand. Skill mix and better use of radiographer reporting can help with workload and are already well established for some teams and imaging types (especially ultrasound and plain film imaging). Perhaps artificial intelligence will finally deliver its promise? Will this be enough? I doubt it.

So how can we deliver? With our current model of healthcare, ultimately, we will not be able to. The spending graphs for healthcare as a proportion of GDP extrapolate to this inevitability. Without rethinking the model, services will fail, little by little and around the edges at first, in myriad unplanned ways. The deterioration will manifest as longer waiting times and failure to meet constitutional and other standards, increases in falls, failures in infection prevention and control, loss of access for marginalised groups, estate degradation, workforce crises, increased complaints and litigation and in other, sometimes immeasurable, important ways. Does this sound familiar? It’s happening already. The irony is that as we spend more on increasingly expensive, process-focussed, fractured and technology-driven healthcare, we deliver less health and the experience of service users deteriorates. Healthcare delivery is more than just logistics.

We cannot address delivery without controlling demand in a systemwide manner. This especially applies to complex new therapies, imaging and drugs (which are the primary drivers of increased spending). Practical demand management is hard because we assume more healthcare equals better health, are beguiled by technology, no longer understand risk and are wedded to pathway solutions that reduce some of the intangibles of the human interaction between a patient and a healthcare professional to nodes on a decision tree from which every branch results in more to do. It is also hard because our political structures rely on promises made in a brief electoral cycle, subordinating the ability of our institutions to undertake long term planning. Complex decisions like those that are needed to equitably and ethically address demand are ignored because there will be politically unpalatable losses in the medium term while the wins may take many years to manifest. 

What’s the solution? A massive funding pivot to primary care and its ability to resolve many simple issues quickly, cheaply and effectively? Removing healthcare delivery from governmental control altogether, sacking the Secretary of State and assigning a fixed proportion of GDP for 25 years to allow long term planning? Addressing the social determinants of health: education, housing, lifestyle choice, opportunity, inequality? Robust implementation of cost-effectiveness principles in healthcare design? Public education about risk? Promotion of a stoic understanding of what it means to live a good life, knowing that death is inevitable? 

If all that seems too far outside your zones of control or influence then perhaps in your day-to-day practice take a moment to consider the things you can change. Each time you make a decision, ask yourself: is this test, treatment, referral or innovation really needed? Who am I treating, the patient or myself? Is it easier to do the wrong thing than the right thing and if so, why? Am I too busy to think about this? Am I too proud or too anxious to ask for help? We all have a role to play in identifying pointless, wasted or supplier-induced demand.  Making better small decisions every day is achievable and accumulations of hundreds of thousands of tiny marginal gains can have a big effect. This will not be sufficient on its own, but it’s necessary, vitally so.

Demand. It’s the elephant in the room of healthcare funding. Ignore it and sooner or later we’ll all be trampled. It’s our urgent responsibility as healthcare professionals to act to control demand, even if our government seems unable to.

What’s your identity?

A year or so ago I was ill. It turned out not gravely, but enough to keep me off work for several months. After an initial and shocking diagnosis, some surgery and its embarrassing aftermath and recovery, and huge support from family, friends, colleagues, clinical staff in Leeds and Maggie’s Centre, I started to feel both physically and psychologically back to normal. After six awful weeks, I was well enough to get back to work though there was still some treatment to get through and the advice I received was that I should not (must not!) go back to work until all of my treatment was complete. This advice was correct, delivered kindly and was taken, albeit grudgingly.

So toward the tail end of 2023 I found myself at home, not working for the first time in years, and wondering what to do, because all of a sudden not only had I (temporarily) lost my work, I’d also lost my identity. If I was not working as a Doctor, an Interventional Radiologist, a medical leader and all the other professional roles I had cultivated so carefully in my work over the years, what was I? What was I for?

Medical identity is ingrained into the stories we tell ourselves as a profession. It’s there in literature, drama, art and popular culture. It’s about the competition for access to a medical career, the sacrifices we make, the hours we work, the lives we affect. The narrative is that these experiences are qualitatively different from the experiences of other professionals. Any divergence from the stereotype of a heroic medical figure going the extra mile for her patients is jarring. At its extreme it can be almost indistinguishable from parody.

Is this justified? People employed in healthcare are hugely and uniquely privileged. We share intimate moments in our patients’ lives, from birth to death, before and after. We experience things no-one else does: the exquisite filigree of capillaries on the surface of a pulsing human brain; the knowledge of someone’s future before that person themselves knows, determined in the greyscale of a scan or the banal data of a test result; the attention to private confidences, fears, triumphs and insecurities. It’s in the role distinction that allows us to assault, probe and pry into the lives and bodies of our patients in ways that would result in prosecution for others.

But in many ways, we are no different from other employees and professions, contributing to society. I know many non-medical people who also competed hard to get where they are, who work much more than their contracted hours and who care deeply about what they do. Why are there not such strong identities associated with (for example) teachers, educating and enlightening generations of schoolchildren; entrepreneurs providing jobs and opportunity; lawyers navigating people through statute and caselaw; actors entertaining tens of thousands over a career or politicians struggling to lead in the face of conflicting constituencies? Comparisons of added value are destructive and obviously pointless, but there seems to be something in medical professional identification that is seen (and sees itself) as qualitatively different.

Some might justifiably scoff at this pretence, at the implied self-importance, at the hubris. This is why we laugh at the character of Dr Price in ‘Fawlty Towers’ when he demands his breakfast just because he’s medically qualified, with the non sequitur “I’m a doctor, I’m a doctor and I want my sausages”. But while Dr Price is a pompous ass, I think most doctors internalise their profession to some extent. We rarely describe our employment as ‘a job’. It’s ’a vocation’ or ‘a calling’ with all the baggage that goes with that. Many years ago during some resilience tuition close to the end of my radiology training programme, I (with hindsight embarrassingly) admonished the facilitator for suggesting medicine was a career like any other. No, I told her: it’s part of me, it’s who I am.

So what? Does it matter what we tell ourselves? Who cares? In many ways, other than making us boring dinner party guests and drinking companions, it doesn’t. In other ways it matters hugely.

Managing change in any organisation usually involves developing awareness and urgency around the need for change and building a ‘core coalition’ of people to begin to deliver it. Change also involves ending, losing and letting go. Where people are invested personally in a project or service, challenge to that service (whether internal or external) can result in a grief reaction, beginning with denial and anger. This is amplified when the service subject to the proposed change is enmeshed with professional and personal identity. The (flawed) logic sees change as a threat, not only to professional practice but also to who you are. Change becomes personal and therefore more challenging to manage and deliver.

Some examples:

  • Pooled waiting lists challenge a surgeon’s identity that only she is capable of operating on her patients.
  • Advanced practice and physician associates challenge the identity that some jobs can only be done by medically qualified people.
  • Checklists and other initiatives to flatten hierarchy and improve safety challenge the identity of doctors as the primary source of organisational clinical accountability.

These operational and governance issues are complex enough without identity complicating them further.

Much of change management (and the NHS Change Model) is focussed on understanding and articulating high-order aims to identify commonalities in purpose and to co-create solutions, but even when aims are agreed, identity can confuse the solutions proposed. No-one would argue with the statement ‘patients should not wait a long time for their surgery’ but if the surgeon frames a query about operating theatre efficiency as identity (“I am a slow surgeon” or worse “they say I’m a slow surgeon”) solutions become so much harder to enact. Opportunity is perceived as threat and self-interested or self-preserving responses, disengagement or even conflict are more likely. Progress is slowed. One of the challenges of leadership in healthcare is harnessing an overarching passion for improving patient care while still attending to the beliefs of those whose day-to-day work will be affected.

Identity is also why some doctors take complaint, criticism and inevitable error so personally (something I discussed in this blog). A criticism about who you are is bound to cut more deeply than one about a service to which you contribute, and is more likely to elicit a defensive response than productive enquiry and exploration of the cause and effort to improve.

So medical identity matters. At a personal level it matters because the one certainty of a career in medicine is that it will end: what does that leave you with if your identity is your work? At an organisational level it can hinder progress. So pay attention to your identity: medicine is (just) a hugely fulfilling, privileged and important job that done well can deeply affect the lives of many people. But when you disappear down its rabbit hole too far, when your job becomes who you are, you’ve got a problem.

I’ve been slowly realigning my identity away from my assertion twenty years ago that my job is who I am. I’m also a father, husband, son, friend, cook, host, occasional writer and enthusiastic (but average) cyclist. One of the few good things to come out of my illness was to accelerate my preparation for the day when I have to let go permanently of that part of my identity that says I am a doctor.


Note on the image:

Why an iceberg? For an example see this post or search ‘Identity Iceberg’

The burden of knowledge

When I was learning my trade as an interventional radiologist, decisions seemed easy. I did what my supervising consultants told me to do. Shall I put a stent in this vessel, undertake this aneurysm repair, do this embolisation? These were easy questions, with the same easy answer. If the boss says do it, then do it. Along with this simplicity came an easy and beguiling belief that I was doing the right thing – by my patient, by wider society and by my profession. The challenges were technical. Was I a good enough operator to do the procedure well? Could I establish rapport with the patient sufficiently for them to trust me to do it? Was there a complication and if so, what had I learned from it? Nuances of hand-eye coordination, positioning, device selection, team management all developed within this context. This was the ‘cutting well’ of the old surgical adage, ‘choose well, cut well, get well’.

As I approached the end of my training, and in the early years of my consultant practice, cutting well became a given, or at least enough of a given that my complication rates did not raise concerns amongst my colleagues. I was definitely not the most skilled, patient or creative of my colleagues but I was (and I hope remain) hard working, conscientious and reflective, enough to be welcomed as a productive member of the team. 

My experience grew, and the technicalities of the procedures became easier as conscious expertise became subconscious. But the work became more difficult absent the safety blanket of a supervising colleague. Choosing well was complicated. The more I read the literature, the more difficult choosing well became and the less sure I was that my choice was correct. How does the literature I’ve read represent this patient? Is my interpretation of it right or have I got the wrong end of the stick? What about the literature I haven’t read? Should I stent this vessel? Use this device? Treat this patient? These decisions increasingly became a cognitive battle with myself: in the outpatient department, when vetting referrals or (worse still) in the middle of a procedure.

But I still felt secure that the wider perspective was that I was doing good work. I was helping my patients and was contributing to the health of the society in which I live. My patients seemed happy after their interactions with me. My outcomes were satisfactory, I was respected by colleagues for my clinical decision making. I felt fulfilled, important, imbued with purpose and professional value.

I continued reading and attending conferences and symposia. I started peer-reviewing. And like watching a play or reading a novel, I became increasingly familiar with the characters and the wider professional landscape. And slowly and surely doubt insinuated itself into my thinking. What were the motivations behind this publication? Why did they choose this study design, outcome measure or results interpretation over that one? Are there vested interests and who do they serve? The more my knowledge and experience grew, the more I worried that choosing well applied not only to individual patients but to whole classes of procedure. Does the evidence justify us doing this procedure at all? And if it does, is the size of the effect sufficient for me to be clear I am helping this patient, with all the personal and professional satisfaction that comes with that? It’s hard to maintain the hubristic illusion of the brilliant life (or limb) saver as the number-needed-to-treat (NNT) creeps up. If the NNT is 10, which of the ten patients are you helping? Which are you harming? In which does it make no difference at all? What does treating this patient mean for that patient? Those patients? Everyone? I felt my identity was a house of cards, ready to collapse at any moment, like a priest losing his faith as he questions the relevance of theology the more of it he reads.
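For readers less familiar with the term, the NNT is simply the reciprocal of the absolute risk reduction; the figures below are illustrative, not drawn from any particular trial:

$$ \text{NNT} = \frac{1}{\text{ARR}} = \frac{1}{0.20 - 0.10} = 10 $$

That is, if an intervention reduces the event rate from 20% to 10%, ten patients must be treated for one to benefit; the other nine undergo the procedure, and carry its risks, with no change in their outcome.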

I wondered if I was experiencing burnout, but I remained able to empathise with my patients, perhaps more so than before. I spent longer with them, discussing the uncertainties, my uncertainties, about what the right thing to do was. I think our shared decision making got better. I was not exhausted or mentally drained, but I did feel a gnawing anxiety about the cognitive gap opening between my thinking and where I had come from.

As I enter what is likely to be the final decade of my practice, I am burdened with a deep sense of uncertainty about my chosen specialty and about technocratic medicine in general. We rarely ask the questions I am increasingly preoccupied by: what is technologically advanced healthcare for? Who benefits? What can we afford and what do we choose to afford? Why? Is nudging someone from one survival curve to another slightly shallower curve sufficient reason to undertake an intervention? I belatedly realise the specialist literature of my field often fails to consider these big issues, focussing instead on questions of technique and narrowly defined benefit. Worse, broader questions (Why? What for? So what?) are frequently framed or perceived as a threat.

Where does this leave me? Does this realisation make me wise? A dreary cynic? A sensible counsel to the exuberance of my more youthful colleagues or a curmudgeonly Luddite to whom all progress is anathema? I shall leave it to those who know me to answer those questions. But it seems to me essential that we engage with questions about why. Until we do, we risk some of modern therapeutic medicine becoming elegant technique in search of a disease.

Inch-wide mile-deep radiology

The weary joke about sub-specialisation in medicine goes that a team providing cutting edge care for big toe pathology fears obsolescence in the face of increasing big toe technical and academic advancement and eventually splits. Team hallux sinister and team hallux dexter are born.

Like all worn-out jokes, there is an underlying kernel of truth. Healthcare has become more complex. Diagnostic and management pathways have become intricate with multiple decision nodes sometimes requiring high stakes choices at pace. All this requires staff intimately conversant with the details of the pathways they provide and able to triage to other teams when they recognise a patient falling out of their expertise area: it requires sub-specialisation. Sub-specialisation has undoubtedly resulted in improvements in the care of patients in much of medicine. If you had an MI in the 1960s you’d be put on a ward to see if you got better and maybe prescribed some calisthenics if you did. You were lucky if you got aspirin. Now with sub-specialty teams providing 24/7 percutaneous coronary intervention, secondary prevention, risk stratification, cardiac rehabilitation, delayed elective revascularization and so on, survival and morbidity following an MI are unrecognisable from the 1960s and continue to improve further.

The joke though is that the endpoint of sub-specialisation is absurdity, that there is a risk of disappearing down a technical and professional rabbit hole so deep and long that you, or your team, like teams hallux sinister and dexter, become an irrelevance. But it’s not necessary to reduce to the absurd to highlight a number of risks with sub-specialisation that temper its undoubted benefits.

The first risk, and the most obvious, is that patients don’t come readily parcelled in handy chunks of pathology that fit neatly into clinical pathways. They are complicated. The 80-year-old with a 6.3cm AAA may also have prostate cancer, COPD, a wife with early dementia and a family that, while caring and concerned, live too far away to offer any robust practical support. We can all recall patients whose lives become dominated by increasingly frequent visits to hospital, meeting fractured and disparate healthcare teams. How do we prevent their experience of care becoming confusing and burdensome? Who is going to help them make a holistic decision about what their therapeutic priorities are when each sub-specialty team is only familiar with a little bit of their lived experience? As the population ages, complex multi-morbidity is commonplace and patient-centred practice becomes increasingly important. This requires generalists, or the active maintenance of a generalist overview. Or, perhaps paradoxically, specialist generalists like POPS teams.

A second risk is that it can lead to over treatment. In the context of the joke, the teams hallux justify their setup and ongoing existence by the need to undertake increasingly complicated interventions on big toes. Need is a slippery word here (see ‘What do we mean by need?’). Is the need driven by a population health deficit which can be cost-effectively corrected, or by a professional interest in a particular niche pathology? Economists call this supplier induced demand. In layman’s terms, if you are a carpenter with a hammer, everything looks like a nail. 

Finally, increasing sub-specialisation can lead to loss of workforce cohesion and siloed – rather than systems – thinking. When colleagues have little in common, they communicate and collaborate less and can even start to perceive each other as threats. Opportunities for informal interactions are lost, relationships deteriorate and solidarity withers. This can be exacerbated by specialty teams working in geographically isolated locations. But there remain many shared challenges in the provision of healthcare that are common whatever the sub-specialist interest and which require team-working and cooperation to solve. These challenges vary in scope and complexity from the contribution by sub-specialty teams to out-of-hours or emergency general services to much wider policy or philosophical problems such as population level health interventions, social justice in health, inclusion and equity in healthcare provision or the sustainability and environmental impact of services. In my own sub-specialty of interventional radiology, conversations about these wider aspects of what we do are very much in their infancy.

In radiology, sub-specialisation is inevitable and necessary. I can perform a TACE but I’m not very good at reporting the liver MRI on which the liver lesion was diagnosed. I think the converse applies. Modern imaging is so complex, so rapidly evolving and so central to modern healthcare that sub-specialisation within radiology is a representative microcosm of specialisation within medicine in general. 

But as in medicine in general, sub-specialisation in radiology creates difficulties. Even in a large department, comprehensive cover for acute radiology, especially out-of-hours, can be complex to organise if colleagues no longer feel competent to report imaging outside their immediate area of expertise. Cover becomes a complex tessellation of overlapping subspecialty skills, supported by byzantine risk-assessed protocols for who reports what and when, creating confusion for referring clinicians and trainees alike. Or it is outsourced to an external provider willing to provide non-specialist radiology for a fee.

Sub-specialisation can also mean that some services, modalities or examinations get left behind, lost in the gaps between multiple areas of individual expertise. Plain film radiology and general ultrasound spring to mind. Finally, operational pressure in one sub-specialist area (for example a particularly large backlog due to planned or unplanned leave or a scanning initiative) is difficult to mitigate if the excess workload cannot be shared more widely.

One of the roles of radiology leadership is to manage these challenges, to steer a department between the competing risks and benefits of sub-specialism vs generalism and of individual aspiration vs operational necessity. The key to this is the nurturing of teamwork and communication. Sub-specialisation within teams is something we are familiar with in all aspects of life: in sport think of the seamless drafting of a cycle team or the elegance and grace of a football team at the top of their game; in commerce think of the thousands of people with different skillsets it took to design, produce, deliver and maintain the device you are reading this on; in family think of your role: parent, lover, cook, taxi driver, breadwinner, emotional anchor, comedian or straight man? Sub-specialist cooperation is ubiquitous.

Part of the key to a successful radiology team (any team) is not that everyone does the same thing or is treated the same way. It’s the creation of a culture of professional respect, equity and understanding within which individuals or groups can enjoy and deliver their sub-specialty interests (with all the clinical and operational benefit this brings) without risking the downsides of this specialisation. A radiology team with a strong sense of overarching collaborative endeavour and collective ownership will automatically mitigate the risks of sub-specialisation, of siloed thinking, of isolationism or protectionism. It will identify the gaps and manage them.

But a strong team culture is not only built or modelled by its leadership. The members of the team also need to be willing to be players for it, to sacrifice some (not all) of their personal desire for the sake of the collective. So it’s OK to have your aspirations in the clouds, to be an expert left-big-toe radiologist, to have an inch-wide mile-deep practice some of the time. But you also need to keep yourself grounded in the general, in the shared. You may find that this can be just as fascinating, just as fulfilling. The broadening of perspective it affords as you leave your rabbit hole, see the sky and breathe the fresh air of new opportunity can be invigorating! I know: I’ve done it.

So be partially pluripotent. Don’t fall into the trap of sub-specialising yourself into irrelevance. Avoid this mainly for your own sake, but also for the benefit of your colleagues, your department and the community it serves and your profession.

Excellence, or is good-enough radiology good-enough?

Some questions:

  • You have a new MRI scanner that can provide lots of additional sequences that increase diagnostic sensitivity slightly but also increase scan time by 25%. Do you implement them?
  • A trial identifies that a new cancer surveillance protocol improves recurrence detection rates but involves twice as much imaging as the previous protocol. Do you agree to its introduction?
  • You review non-urgent overnight inpatient imaging and find that next-morning reporting rarely (but not never) results in harm. How much expensive overnight resource do you allocate to manage this risk?

The RCR Quality Standard for Imaging provides a starting point in setting out a comprehensive quality baseline for a radiology department with detailed descriptors across many domains. It describes a service fulfilling the imaging need of the population it serves quickly, safely, effectively, collaboratively and with dignity. An excellent imaging service should (arguably) also provide its workforce with a career structure that allows them to personally and professionally flourish, with interesting and stimulating work and the opportunity to innovate or spread the innovations of others.

Does an excellent service require decisions such as the ones I’ve outlined above? Could it deliver more out-of-hours reporting without impact on daytime capacity? Would it have the redundancy to increase scan times for some imaging by 25% and the funding to cope with twice as much imaging as the previous protocol? Or would a service implementing such change without challenge be an un-fundable and undesirable fairytale lacking understanding of the wider societal context of resource allocation?

Where do we draw the line at good-enough? Does it matter if a particular decision might expose a small number of patients to harm if it will mitigate other operational risks? Is the potential stifling of innovation (for example by not funding new devices or drugs, preventing the use of new imaging sequences) reasonable if money is productively diverted elsewhere? Does a culture that accepts good-enough inevitably eventually lead to degeneration into a rump or failing service? What  do our patients and their relatives think? What compromises are they willing to make or willing to allow us to make on their behalf?

I doubt many go into healthcare to offer a good-enough service. There is no inspiring vision in the average nor stirring narrative in the adequate. Healthcare is seen as urgent, heroic, saintly, uncompromising: ’Going the extra mile’, ‘Doing the right thing’, ‘Pulling out all the stops’. If we place limits on healthcare professionals’ autonomy to manage patients and services as they think appropriate, does that reduce them to highly-skilled pieceworkers, moving from one patient to the next, constrained by the mandates of a system they have limited power to alter? What does that do for professional satisfaction, identity and social role?

And yet much of healthcare is repetitive and mundane piecework. In radiology it’s the backlog of thousands of routine scans, the GP reporting basket, the waiting list for an image guided biopsy or a fistulaplasty. This work is not sexy or cutting edge but that does not mean it’s not fundamental to what we do and who we are. And of course, each of these mundane events is a source of considerable anxiety, and may even be life changing, for the patient involved.

Lots of questions. Paragraphs of them. To reach an answer needs an exploration of ethics and morality, an understanding of organisational psychology and a wider conversation about what we consider important. Philosophers have wrestled with these big questions for centuries without definitive conclusion. Yet decisions, like the three examples at the top of this blogpost, need to be made and need to be made now. They will not wait for a psycho-sociocultural analysis of how modern society approaches moral philosophy or even for a cost-utility analysis. How then do we make them?

The answer, I think, is to recognise that while the questions (and many others in healthcare) seem simple, almost binary (implement or not), in fact they are wicked. A wicked question has a number of characteristics including the lack of a clear definition, the involvement of many stakeholders (with different priorities, ethics and worldviews) and the lack of clear criteria for determining whether the answer arrived at was ‘correct’. For example, implementing the new surveillance program described above might be enthusiastically welcomed by patients with the particular cancer involved, but not by others who see resource diverted. It might be embraced by clinicians excited by the opportunity to improve their service, or resented for the increased workload. A review describing the number of additional recurrent cancers identified and the number of additional scans undertaken might equally be interpreted as identifying a great step forward in care or a colossal waste of money.

Management of wicked problems can be undertaken in a variety of ways, from the imposition of a solution by those who wield power (and who may or may not own the consequences of their decision) to broad collaboration, iterating to an outcome where the driver is agreement on a solution rather than the solution itself. We might want to implement the new cancer follow-up protocol, or we might not, but all stakeholders should feel able to contribute to the decision and at least be satisfied that their voice has been heard and understood, even if the ultimate decision was not one they favoured.

So where does this leave excellence in radiology? 

It means that excellence is not fixed: it’s constantly moving, changing and adapting. It requires ongoing conversations: with the people who deliver, pay for and organise the service; with the people who use it; and crucially with the people who experience it – our patients. It means exploring what we can offer and then delivering it well. There may be agreed metrics or standards and these may change over time – but these metrics need to be meaningful for everyone, or else they will be resented or ignored. Excellence, however, does not mean we need to do or offer everything. What we choose to do is up to us to decide. Good-enough can be, and often is, excellent.

This collaboration and shared purpose are the protection against the professional disenfranchisement associated with the mundane. Feeling part of a bigger whole, of a movement, drives engagement and job satisfaction, as the (well-rehearsed and possibly apocryphal) story about President Kennedy and a cleaner at NASA illustrates. The RCR QSI document sets exacting standards for good-enough which protect against mediocrity. Collaboration in their implementation and beyond will drive services to be better, not worse.

Excellence is not an endpoint, it’s a process. It’s bigger than the individual decisions made about whether or not to do a particular thing. Decisions about increasing our sequences, adopting a new surveillance strategy, resourcing overnight reporting and a myriad of others require us to work together.

Working together for a common goal. That’s excellence in radiology.

Getting the basics right

It’s Thursday morning in late 1999, a grey gloomy day in London, the clouds heavy with the threat of another grimy shower. The celebrations of the forthcoming millennium seem a long way off, tainted by the fact that I will be on-call for it, and by an ongoing row stoked by the right wing press about whether NHS staff working New Year’s Eve should be entitled to a millennium bonus.

In an anonymous meeting room, bright with fluorescent light beneath the ubiquitous suspended white plastic tile ceiling, a team is gathering. Many hold coffee in a paper cup or arrive still in their overcoat, damp from the drizzle or the absorbed perspiration of the underground commute. Some are glad to be there, happy to have a temporary break from the monotony of another ward round and the jobs list. Some are perhaps frustrated: they would rather be operating. Some are anxious: they have a presentation to give.

Seats are taken. Plastic chairs rattle as they are removed from a stack and scraped across the Formica flooring. As with any such meeting, at any time in history and anywhere in the world, the unwritten hierarchy becomes manifest. Juniors at the back. Senior staff at the front. At precisely 8:07 the meeting is called to order. The bimonthly neurosurgery audit meeting has begun.

A latecomer arrives. A junior doctor – a mere moment of disapproving glances is enough though for them to know their card is marked. I’m glad it’s not me this time. There is a spare seat at the front. They make their way to the back, and stand.

This then is the unlikely setting for one of the most formative and enlightening moments of my career.

The theme for the morning is head injury. There is a useful educational session on different kinds of intracranial haemorrhage and how to distinguish them given by one of my junior colleagues. It is well received with murmurings of assent from the front. 

The Professor then stands up. He gives a talk about invasive measurement of the pH in an injured brain and using it as a guide to alter management. It’s very complicated: there are lots of slides and graphs. There is then a lengthy discussion at the front about which bit of brain the probe should be inserted into, when and what to do with the results. I’m lost and my attention drifts. 

Then an anaesthetic registrar stands up. He’s nearly finished his training – ‘cooked’ in the slang – and will be a consultant soon. He seems unbowed by the situation. His presentation is brief and simple. It’s about oxygenation in patients with head injury on the high dependency unit. His audit occurred almost contemporaneously with the Professor’s study on pH. The summary: we are not very good at maintaining oxygenation and poorer oxygenation is associated with poorer outcomes. He sits down. Then… nothing. There is none of the animated discussion the Professor’s talk stimulated, no discussion of how we might do this simple thing better, of what that might need. The anaesthetist does not seem surprised at this. Perhaps he’s been here before.

But I am amazed. The juxtaposition between the two presentations could not have been more stark. The interest was all in the complex, experimental and at best marginally beneficial intervention rather than in ensuring simple, established best practice. I think about asking a question about this but decide not to: it seems unwise. I still have a few months to go before the February changeover and I am interested in neurosurgery as a career. I may need a reference and am not sure how the question will land. I do ask my supervising consultant about it later: a thoughtful man for whom I’d developed a lot of respect. He’d made the same observation, but decided (as I did) that the social circumstances of the meeting precluded a wider discussion about how we were not getting the basics right.

This event made a deep impression on me. I can still recall my incredulity, bordering on anger. It may be that this was misplaced. Perhaps substantial effort had been made to address the oxygenation of head-injured patients over the years. Maybe there was simply a desire to discuss something new, rather than something well known and difficult; something that had been discussed on many occasions before my arrival and would be again, long after I had left. But whatever had been tried, the audit suggested it wasn’t working.

I am reminded of this episode by a friend whom I met years later by chance in Barcelona. I was there for an interventional radiology conference. He was there for a conference on respiratory medicine. He described a lecture on checkpoint inhibition in the treatment of lung cancer, given to a packed auditorium: standing room only, people sitting on the steps of the aisles in concentrated attention. This was immediately followed by another in the same hall for which only a handful of the audience remained, the rest trooping out as the moderator protested. The topic of the second lecture: helping people to stop smoking.

Get the basics right: they are necessary and are often sufficient. The rest can follow.

NHS workforce and the reality distortion field

The process of designing the first Apple Macintosh computer in the early 1980s was an arduous one. The exacting demands of Apple co-founder Steve Jobs resulted in his employees and colleagues describing a ‘reality distortion field’ around him and the people who came into his orbit, within which the impossible became possible. Rectangles with rounded corners when the processor couldn’t draw a circle? No problem. A device with a footprint smaller than a phone book when everything else was three times this size? OK. Shave half a minute off an already streamlined boot process? Yeah, we can do that.

Jobs was able to bridge the gulf between expectation and reality through the clarity of his idea, assisted by the sheer force of his personality, his drive, his obsession and a large dose of behaviour one might describe as bullying.

In today’s NHS we see a huge gulf between expectation and reality. Amongst other laudable aspirations NHS England [NHSE] expects to eliminate elective waits of over 65 weeks by March 2024 and increase diagnostic activity to 120% of pre-pandemic levels by April 2023. There will be improved cancer waiting times and outcomes, delivery of 50 million more GP appointments, upgraded maternity services and more, all delivered within a balanced budget.

And yet as I write, emergency departments are full to overflowing and secondary care is snarled up as social care cannot take discharges. High cost resources like theatres stand idle as hospitals grind to a halt. Primary care is drowning in demand. Much infrastructure is ageing. Estate is frequently tired, cramped and unfit for purpose. In this context, a reality distortion field with the metaphorical power of a black hole is required to make NHSE’s objectives seem even remotely achievable.

There are things that can be done: waste can be reduced and unnecessary bureaucracy eliminated; skill mix can be improved and workforce better deployed; estate can be upgraded flexibly to allow for new ways of working; services can be made more responsive to the needs of the people the NHS serves. Perhaps demand or public and political expectation can be managed. Maybe artificial intelligence or other technocratic solutions can finally deliver on their promise. We can refresh our NHS and make it comparable again with the best of our neighbouring nations.

To achieve all this requires money. This is necessary but insufficient. It also requires people.

Without a motivated, engaged, enthusiastic, driven workforce, recovering from the current crisis will be impossible. It’s the staff of the NHS and social care sector who identify the blockages and inefficiencies and create the solutions needed to improve at all levels: from district nursing team to quaternary hospital service, from clinic to Integrated Care Board. This is not a new concept: Kaizen methodology with continuous improvement driven by all staff is well established in business and healthcare. It is the staff who deliver.

Jobs recognised the importance of people in delivering his vision. He surrounded himself with people he described as his ‘A’ team. They achieved what they did because, while he was a martinet, difficult to work with and prone to bouts of anger, rudeness and extreme condescension, he was also inspiring: he instilled loyalty and belief. People wanted to work for him, to deliver for him.

Given the strong vocational ethos in the NHS workforce, it should be easy to motivate its staff. But instead I perceive a disillusionment and learned helplessness that I have never known before. This is corrosive to initiative and problem solving. Motivating the workforce means paying people appropriately, recognising that pay and compensation have a significant effect on morale, on the recruitment of new colleagues and on the retention of existing ones. It means publishing a long-overdue workforce strategy. It means listening, and understanding the daily frustrations that erode professionalism and vocational drive. It means appreciating that working in ageing buildings with ageing equipment will inevitably breed apathy. It means transformative investment.

But more than this the NHS needs a transformative vision, akin to that seen at its inception. This means having the bravery and honesty to start a public discourse on how to fund the NHS and social care long term: what we can (or choose to) afford as a country and what we cannot (or choose not to). It means confronting difficult policy decisions about cost-effectiveness and service rationing with the public, professionals and industry. It means addressing both the demand for, and the supply of, healthcare. Everyone I know in the NHS recognises that we cannot go on as we are, spending more and more on increasingly marginal outcomes.

And this is where the reality distortion field can help: because with the development of a transformative vision and a clear commitment to transformative investment I believe the NHS’s staff will deliver the solutions required. It has happened before and can happen again. Even before the money flows, the idea that the government understands and is committed to action will empower the workforce. It will allow the distortion field to develop and the gulf between expectation and reality to be bridged. But until the vision is developed and the investment begins there will be no reality distortion in the NHS. Just a grim reality.

Where might the vision come from? Clearly not from our current government, which seems to have only a wish-list of near-future outcomes expedient to its prospects at the next general election. To me, the only option is a long-term collaborative effort across successive Parliaments and political ideologies, involving all public, private, patient and professional stakeholders to co-create it. Whether there is the political will, executive structure or inspiring leader to facilitate this remains to be seen. Steve Barclay is not Steve Jobs.