The government’s antibiotic prescribing trial proved that a letter can change behaviour. It didn’t prove that changing behaviour is enough. Here’s what a system-level intervention would look like instead.
The BETA “Nudge vs Superbugs” trial was, by its own measures, a success. A peer-comparison letter sent to high-prescribing GPs reduced antibiotic prescriptions by an estimated 126,000 over six months. Cheap, scalable, and statistically robust.
But as explored in the companion piece, the trial had a structural blind spot: it measured prescribing rates, not patient outcomes. It nudged individual behaviour without changing the system those individuals were working inside. And it offered no downstream tracking to confirm that prescribing less actually meant prescribing better.
That gap points to a deeper problem in how behavioural interventions get designed. When the goal is to shift population-level numbers quickly and cheaply, the temptation is to reach for the lightest possible tool. Sometimes that’s appropriate. In healthcare, where clinical decisions interact with patient safety, it often isn’t.
From nudge to system: introducing AIM
The Antibiotic Intelligence Model (AIM) is a proposed alternative — not a replacement for peer comparison, but a structural layer beneath it. Where the BETA trial targeted the doctor’s identity, AIM targets the decision environment the doctor is working in.
The core mechanism is a script-gating system integrated directly into clinical software. When a GP is about to prescribe an antibiotic and diagnostic certainty is low — no confirmed bacterial infection, no high-risk patient flags — the prescription is automatically deferred rather than issued immediately. The patient leaves with something concrete in hand: a recovery plan covering rest, fluids, and over-the-counter options, along with SMS follow-up prompts over the next 48 to 72 hours.
If PCR testing subsequently confirms a bacterial infection, or the GP flags clinical vulnerability, the script is released automatically. The delay is the intervention — not a refusal, but a pause that allows new information to arrive before the decision is locked in.
Critically, GPs can override the delay at any point with a brief clinical justification. This isn’t a system that overrules doctors. It’s one that supports their judgement by building in the space that time-pressured consultations currently don’t have.
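The gating rule described above — defer when diagnostic certainty is low, release on a confirmed infection, a vulnerability flag, or a GP override — can be sketched in a few lines. This is a minimal illustration, not AIM's actual implementation; every field name, the 48-hour window, and the follow-up schedule are assumptions for the sake of the example.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class Consultation:
    # Hypothetical fields, illustrative only
    bacterial_confirmed: bool = False          # e.g. a positive PCR result
    high_risk_flags: list = field(default_factory=list)  # clinical vulnerability markers
    override_reason: Optional[str] = None      # GP's brief clinical justification

def gate_prescription(c: Consultation, deferral_hours: int = 48) -> dict:
    """Decide whether an antibiotic script is issued now or deferred."""
    # Release immediately when diagnostic certainty is high, the patient
    # is clinically vulnerable, or the GP overrides with a justification.
    if c.bacterial_confirmed or c.high_risk_flags or c.override_reason:
        return {"status": "issued", "available_at": datetime.now()}
    # Otherwise defer: the patient leaves with a recovery plan, and the
    # script auto-releases if new information (e.g. PCR) arrives later.
    return {
        "status": "deferred",
        "available_at": datetime.now() + timedelta(hours=deferral_hours),
        "recovery_plan": ["rest", "fluids", "over-the-counter symptom relief"],
        "sms_followup_hours": [24, 48, 72],
    }
```

Note that the override path and the release path are deliberately identical in outcome: the system never distinguishes "the doctor insisted" from "the test came back positive", which is what keeps it supportive rather than adversarial.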
Closing the gaps the nudge left open
AIM addresses three specific failures of the original trial design.
Diagnostic support. The BETA trial assumed GPs had sufficient information to make better decisions — they just needed to be motivated differently. AIM starts from the opposite assumption: that the information environment is the problem, not the doctor’s intentions. By integrating with testing workflows and flagging high-risk patient profiles, AIM gives GPs something to act on beyond peer comparison and professional guilt.
Patient reframing. One of the underappreciated challenges of prescribing restraint is that patients often arrive expecting a script. “No prescription” can feel like dismissal. AIM reframes the deferred prescription as an active clinical decision — here is your recovery plan, here is what to watch for, here is when the script will be available if you need it. The patient leaves feeling cared for rather than turned away.
Identity-aware support. The original trial sent the same letter to every high prescriber. But a young GP in a regional town managing patients who’ve driven two hours for a consultation is in a different position to an established urban practitioner. AIM includes optional support tailored to GPs who may face higher patient pressure or lower access to diagnostic resources — making the intervention responsive to the contexts that make restraint hardest.
What a proper trial would measure
A rigorous test of AIM would run as a cluster-randomised controlled trial across GP practices, with two arms: standard care versus AIM integration. The intervention period would last twelve months, with outcome data collected across three subsequent flu seasons to test persistence.
Primary outcomes would include prescribing rate per 1,000 consultations, re-consultation rates within 14 days, and time-to-recovery. Secondary outcomes would capture GP confidence in withholding, patient satisfaction, and diagnostic test usage — the measures the BETA trial never touched.
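For concreteness, the two headline outcome measures are simple rates. A small sketch, with hypothetical data — the figures below are invented for illustration, not trial results:

```python
def prescribing_rate_per_1000(prescriptions: int, consultations: int) -> float:
    """Antibiotic prescriptions per 1,000 consultations."""
    return 1000 * prescriptions / consultations

def reconsultation_rate(visits: list, window_days: int = 14) -> float:
    """Share of index consultations followed by a return visit within the window.

    `visits` is a list of (index_day, return_day_or_None) tuples.
    """
    returns = sum(
        1 for index_day, return_day in visits
        if return_day is not None and 0 < return_day - index_day <= window_days
    )
    return returns / len(visits)

# Invented example: 84 scripts across 2,000 consultations
rate = prescribing_rate_per_1000(84, 2000)   # 42.0 per 1,000
```

The re-consultation measure is the one that does the real work: a falling prescribing rate paired with a rising 14-day re-consultation rate would signal that restraint is pushing sick patients back through the door — exactly the downstream check the BETA trial never ran.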
This is a more expensive and complex trial to run. But the question it answers — did the intervention improve care, not just reduce a number — is the question that matters when the stakes involve patient health.
The argument for going further
Behavioural interventions are most powerful when they work with the grain of human decision-making rather than against it. The BETA trial did that at the level of professional identity. AIM does it at the level of the system — building the pause, the information, and the support directly into the moment of decision, rather than hoping a letter sent months earlier is still influencing behaviour in a high-pressure consultation.
One-off nudges are fragile. Systems that change the environment are not.
References: BETA (2018); Chater & Loewenstein (2022); Hallsworth et al. (2016); Wagner et al. (2024); Thaler & Sunstein (2008)
This piece is the follow-up to 126,000 Fewer Prescriptions. But Did Anyone Check on the Patients? in Kynd Thoughts — which examines what the original trial missed.


