In 2017, the Australian government ran a behavioural trial that reduced antibiotic prescriptions at scale. It was celebrated as a success. There’s just one question nobody seems to have answered.
That June, letters landed on the desks of Australia’s highest-prescribing GPs. The message was simple and deliberately uncomfortable: you prescribe more antibiotics than 80% of your peers.
That’s it. No new guidelines. No training. No system change. Just a single social comparison, designed to activate something doctors care deeply about — their professional identity and reputation among colleagues.
It worked. Over the following six months, an estimated 126,000 fewer antibiotic prescriptions were dispensed. The trial, run by BETA (the Behavioural Economics Team of the Australian Government), became one of the most cited examples of nudge theory applied to healthcare. A light-touch intervention. Measurable results. Low cost.
By almost every metric the trial set for itself, it was a success.
Here’s the problem. The trial only measured what it wanted to find.
The primary outcome was prescriptions dispensed per 1,000 GP consultations. That number went down. But prescribing less is not the same as prescribing better. The trial collected no data on what happened to the patients who didn’t get antibiotics.
Were they fine? Did they recover without treatment? Did some of them come back sicker? Did a subset end up hospitalised with conditions that antibiotics might have addressed?
Nobody knows. Because nobody looked.
This matters more than it might seem. The 2017 flu season was not a typical one. It was one of Australia’s most severe influenza seasons on record — dominated by the H3N2 strain, which historically causes higher rates of complications in the elderly and immunocompromised. NSW recorded over 9,300 influenza-related hospital admissions that year, up sharply from 3,560 the year before. Influenza-related deaths ran at 7.9 per 100,000 population, compared to 2.8 the previous year.
These were the exact months the trial was running.
That doesn’t mean the prescribing reduction caused harm. Many of those hospitalisations would have been viral, and antibiotics wouldn’t have helped. But it raises a question the trial design simply cannot answer: in a year of unusually high clinical complexity, did some GPs withhold antibiotics from patients who genuinely needed them?
The nudge worked on the wrong level
This is what researchers Chater and Loewenstein call an i-frame problem — a solution designed to change individual behaviour without addressing the system those individuals operate in. The letter told doctors to prescribe less. It didn’t give them better diagnostic tools, more time with patients, or a structured way to communicate “no prescription” in a way patients would accept.
In 2017, rapid PCR or CRP testing — the kind of diagnostic support that would help a GP distinguish bacterial from viral infection — was rarely subsidised. Doctors were being nudged toward restraint while still flying largely blind. A GP facing a high-risk elderly patient with a chest infection, under time pressure, with no diagnostic certainty, is in a fundamentally different position from the abstract “high prescriber” the letter addressed.
There’s also the spillover question. Did patients simply go to a different GP? Did pharmacists pick up the gap, or add to the confusion? The trial had no mechanism to track any of this.
A headline number without a safety net
None of this is to say the BETA trial was wrong. Antibiotic resistance is one of the most serious public health threats of our time, and reducing unnecessary prescribing matters enormously. The trial demonstrated, convincingly, that a well-designed nudge can shift behaviour at population scale for very little cost.
But demonstrating behaviour change is not the same as demonstrating improved outcomes. And when the behaviour being changed sits inside a clinical relationship — where getting it wrong has real consequences for real patients — that distinction matters.
A figure of 126,000 fewer prescriptions is compelling. What we don’t know is what sits behind it.
References: BETA (2018); NSW Health (2018); Chater & Loewenstein (2022); Hallsworth et al. (2016); Thaler & Sunstein (2008)
Interested in what a better-designed intervention might look like? Read the follow-up piece in Kynd Policy: When a Nudge Isn’t Enough: The Case for AIM.