This article is part of a series: Co-piloting

In July 2021, we launched our OpenSAFELY co-pilot programme (you might have read Millie’s blog about it here). This programme provides all new users of the OpenSAFELY platform with dedicated 1:1 support in the early days of their research project.


Co-piloting meets a number of needs for OpenSAFELY. Firstly, it helps new users become productive from the moment they arrive on the platform. We really care about productivity: that’s one reason why, for example, we also maintain a comprehensive online technical user manual. Co-piloting is also helpful when new users have strong skills in some, but not all, of the areas needed to deliver an output: for example, some users are experts in computational data science and can use GitHub proficiently, but have less experience of working with electronic health records. Lastly, we get to learn about our users and how they work on the platform, which helps us develop the platform further.

The dedicated support comes in the form of an OpenSAFELY researcher: someone who has experience implementing studies in OpenSAFELY, and who has an existing relationship with the software developers who build and grow the platform. Following this first period of intense 1:1 support, pilots and co-pilots remain in touch via Slack or our discussion forum (the co-piloting programme is described in more detail in our docs).

We’re now well and truly up and running, with 55 co-piloted projects at various stages of development. We thought it would be a good time to take stock of how the programme has changed over time, what we’ve learned and where we see the co-piloting programme going in the future.

What’s new?

As a team of co-pilots, we regularly take time to discuss how co-piloted projects are going and what we can do to help pilots and co-pilots make the most of their time together. More broadly, having a wider pool of users—the pilots—working on our platform has helped bring about improvements to the system generally.

We have improved our output checking process

All outputs generated by OpenSAFELY, including those from co-piloted projects, are subject to careful output checking to assess and then minimise disclosure risk. We have developed an output checking workflow which allows pilots to request a release; creates a GitHub issue where output checkers can discuss any issues associated with that release; and then releases these files to our jobs site (subject to approval, of course).

This workflow logs the process from start to finish, and we are now harvesting stats on turnaround time so that we have an even better handle on how it works and where we could improve. More on this in our ongoing series of blogs on output checking!
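As a rough illustration of the kind of turnaround stat we mean, here is a minimal sketch in Python that computes turnaround time from the opened and closed timestamps of a release-request issue; the records and field names below are hypothetical, not drawn from our actual workflow logs.

```python
from datetime import datetime

# Hypothetical sketch: given the opened/closed timestamps of the GitHub issue
# created for each release request, compute the turnaround time in hours.
# These example records are illustrative only, not real workflow data.
requests = [
    {"opened": "2022-05-03T09:15:00", "closed": "2022-05-04T16:40:00"},
    {"opened": "2022-05-10T11:00:00", "closed": "2022-05-11T10:30:00"},
]

def turnaround_hours(request):
    opened = datetime.fromisoformat(request["opened"])
    closed = datetime.fromisoformat(request["closed"])
    return (closed - opened).total_seconds() / 3600

times = [turnaround_hours(r) for r in requests]
print(f"Mean turnaround: {sum(times) / len(times):.1f} hours")
```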

We have quarterly user group meetings

Our aim has always been to provide a platform that facilitates research beyond just our group, so we love to hear about how OpenSAFELY is being used by others! We now hold quarterly user meetings where users (mostly pilots, but sometimes Bennett Institute researchers) are invited to give a short presentation about their project, plans or results, or to share what they have learned about implementing something in OpenSAFELY. We’ve also found that this is a useful forum to inform users about updates to the platform (e.g., new functionality or bug fixes) or updates to our COPI agreement.

We have been talking (in depth!) to users

Catherine, our product manager, joined us in the autumn of last year. One of the things she has introduced is more in-depth consultation with users through interviews; many of these users are current or former participants in the co-piloting programme. These interviews have had a direct impact on how our tech team prioritises its work, highlighting particular areas in which we could bring more value to our users.

We have added new functionality (and with that new documentation)

OpenSAFELY continues to be developed in response to user requirements, so new functionality is continually being added to the platform. For example, in the last few months we have launched a new process whereby pilots can request that their repo is made public. This is more than just flicking a switch in the settings of a GitHub repository: we also ask users to do a bit of a GitHub spring clean and we collect as much information as we can to fulfil our commitment to transparency. Again, pilots make up a large body of invested users and we will be inviting feedback from them to help improve and shape this process in the future.

We have better internal and external resources

Within a few months of launching, we carried out a review of our co-piloting programme. As a result, we developed several new resources. Some of these were externally facing: a slide deck introducing OpenSAFELY to pilots being onboarded (watch our video!) and a structured approach to weekly meetings to help focus efforts on releasing outputs. Others were internal: an interactive dashboard to make co-pilot assignment as easy as possible, and regular “Deep Dive” meetings with a standing agenda to support co-pilots. These resources were designed to streamline the process for both pilots and co-pilots, identify blockers (and solutions) early, and keep projects on track to release within the first four weeks of co-piloting.

What’s in it for us?

As with the development of any successful digital tool, our users shape innovation and inform prioritisation. They help us improve our documentation, develop new functionality, and ultimately meet the needs of the EHR research community. Several new datasets (e.g., ONS-CIS, Renal Registry, ISARIC) and new features (e.g., support for RCTs) have been incorporated as part of co-piloted projects. We actively encourage our users to submit bug reports and feature requests: in this PR we followed up on a suggestion from our collaborators in Bristol to allow dummy data generation via a Poisson distribution. We have established new and productive collaborative relationships, with several of our users returning to run new projects.
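For readers curious what Poisson-distributed dummy data looks like in practice, here is a minimal, generic sketch using NumPy; it deliberately does not reproduce the OpenSAFELY study-definition syntax, and the variable name and mean are hypothetical, chosen purely for illustration.

```python
import numpy as np

# Minimal sketch: generate Poisson-distributed dummy counts, the kind of
# synthetic data that lets study code be tested before it touches real records.
# The variable name and mean below are hypothetical, for illustration only.
rng = np.random.default_rng(seed=42)

n_patients = 1000
mean_admissions = 0.3  # expected number of events per patient

dummy_admission_count = rng.poisson(lam=mean_admissions, size=n_patients)

print(dummy_admission_count[:10])
print(f"Proportion with at least one event: {(dummy_admission_count > 0).mean():.2%}")
```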

What’s next?

Hopefully, more of the same, but even better!

One thing on the horizon is giving co-pilots access to high-resolution “telemetry” data on our jobs server, which was previously only accessible to a few members of our tech team. Co-pilots will be able to delve into several different dashboards that tell us, for example, whether the server is blocked by a small number of huge, very slow jobs or whether it is moving quickly through a queue of smaller jobs.

We also have ideas in the pipeline to generate more video content on our YouTube channel (for example, a “Live OpenSAFELY study build” or “How to navigate the OpenSAFELY docs”) that will help pilots make the transition from our “Getting started guide” to constructing their own study.

Get in touch, we’d love to hear from you.