Omega-3 isn't very effective: using prescribing data to explore the impact of trials, reviews, and guidelines
We’ve been thinking at the Bennett Institute about writing data stories, using our prescribing data, to accompany landmark clinical trials and systematic reviews. Here’s an example.
A new systematic review published this week in JAMA shows that Omega-3 “fish oil” pills don’t really help improve cardiovascular health. As a systematic review, it’s a very useful overview of the existing research. Perhaps reassuringly, as that evidence accumulated over time, clinicians were already changing their prescribing behaviour. Here’s the long-term NHS prescribing data from our database: NHS doctors gradually began to reduce prescribing of omega-3 a long time ago. Peak omega-3 was 2011!
Anyone interested can explore those long-term prescribing trends, for any treatment, using our fabulous new explorer for long-term prescribing trends, on OpenPrescribing here. We’ve aggregated all the national prescribing data back to 1998, correcting for inflation and population change, and put a lovely user-friendly front end onto it. Our paper on that tool has just been accepted in BMJ Open, with various research insights; there are lots of exciting uses, which we’ll blog about when it’s published. We also have a paper under review on using interrupted time series analysis to assess the impact of evidence and guidelines.
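The normalisation step described above — correcting raw spend for inflation and population change — is simple enough to sketch. This is an illustrative sketch only, not the real OpenPrescribing pipeline or schema: the column names, deflator figures, and base year are all invented.

```python
def normalise(rows, base_year=2018):
    """Convert raw yearly spend into inflation-adjusted spend per 1,000 people.

    `rows` is a list of dicts with hypothetical keys: year, spend (raw GBP),
    deflator (a price index such as the GDP deflator), and population.
    """
    base_deflator = next(r["deflator"] for r in rows if r["year"] == base_year)
    out = []
    for r in rows:
        # Re-express spend in base-year prices, then scale per 1,000 population.
        real_spend = r["spend"] * base_deflator / r["deflator"]
        out.append({
            "year": r["year"],
            "real_spend_per_1000": 1000 * real_spend / r["population"],
        })
    return out
```

With both corrections applied, a nominal rise in spend can turn out to be flat, or even falling, in real per-capita terms — which is exactly why the long-term trends tool makes them.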
But using our local prescribing data, you can also drill down to individual CCGs, and even GP practices, to see where information has not penetrated, or at least where prescribing behaviour has not changed. For example: doctors on the Isle of Wight are still prescribing omega-3 at very high rates. They are a seafaring people, which perhaps explains their enthusiasm for omega-3 fish oil pills? Here they are, on a nice map, built in a few seconds using the OpenPrescribing analyse page. On that page, you can also see time trends, and watch the rest of the country change their prescribing behaviour, while the Isle of Wight stays still. We have lots more on variation in how clinical practice changes over time, which we’ll blog about when the papers are up, soon!
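The “spot the outlier” step — finding organisations like the Isle of Wight that sit far above the national norm — can be sketched with a simple z-score threshold. All the names and rates below are invented for illustration; this isn’t how OpenPrescribing itself computes anything.

```python
from statistics import mean, stdev

def high_prescribers(rates, z=2.0):
    """Return organisations whose rate exceeds the mean by `z` standard deviations.

    `rates` maps an organisation name to a prescribing rate
    (e.g. items per 1,000 registered patients).
    """
    m, s = mean(rates.values()), stdev(rates.values())
    return sorted(org for org, rate in rates.items() if rate > m + z * s)
```

Run against per-CCG rates for a single chemical, this kind of check surfaces exactly the organisations worth looking at on the map.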
So what should we do with all this data?
Good question. I have a couple of thoughts.
Firstly: perhaps we should include it in reports on evidence. RCTs and systematic reviews are practical documents. They are there to support practice. So here’s a suggestion: whenever a systematic review, RCT, or guideline is published about a treatment, the paper could also include data showing variation in existing practice, and long-term trends in use, using tools like ours.
This data does, after all, tell us a lot about the impact and urgency of the evidence, and the implementation challenge ahead. On the one hand, a systematic review might say: “the evidence now shows that everyone should use X, but they already are! Great news!”. Or it might say: “everyone is using X, but they should be using Y! Quick everyone, how are we going to change practice, and how will we measure success in this change?”
Secondly: maybe organisations like NICE could work with us, or use prescribing data themselves, to enhance their work. We can model the impact of new NICE guideline recommendations on NHS spend, by looking at current treatment choices, and working out how much more it will cost if everyone changes to the new recommended treatment. But it’s also useful, again, to show how current practice deviates from the new vision of best practice. That’s what tells us about the implementation challenge ahead: changing behaviour across the NHS in light of a new guideline.
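The cost-modelling arithmetic here is simple at heart: extra spend is roughly the number of patients currently on the old treatment multiplied by the difference in cost between the two drugs. A minimal sketch, with every figure invented purely for illustration:

```python
def switching_cost(patients_on_old, annual_cost_old, annual_cost_new):
    """Extra annual spend if every patient on the old treatment moves to the new one.

    Costs are per patient per year, in pounds. A negative result means the
    new recommendation would actually save money.
    """
    return patients_on_old * (annual_cost_new - annual_cost_old)

# Invented example: 50,000 patients, £12 vs £30 per patient per year.
extra = switching_cost(50_000, 12.0, 30.0)  # 900000.0 extra per year
```

Real modelling would of course be messier — partial uptake, dose variation, generic pricing — but prescribing data supplies the one input that’s otherwise a guess: how many patients are on the old treatment right now, and where.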
There’s much more to come from us on different uses of this data. For now, let us know if you think these suggestions are useful; and if you’d like to see more data blogs from us to accompany newly published trials and reviews. Perhaps, as an EBM wonk, you’d like to write this kind of data-blog for us, or for your own blog?
As always: get in touch!