Getting to grips with data and analytics could revolutionise the business of government, but presents challenges as well as opportunities. Recent research from Policy Exchange points a way forward.
The modern world generates a staggering quantity of data. Counting across all forms of storage, from mobile phones to DVDs and hard disks, the world’s capacity to store information has this year reached 2.5 zettabytes. If we stored all this data on DVDs and piled them up, the stack of discs would stretch all the way to the moon and half the distance back again.
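That image is easy to sanity-check. Here is a back-of-envelope calculation, in which the disc specifications are our own assumptions rather than figures from the research (single-layer 4.7 GB DVDs, each 1.2 mm thick):

```python
# Back-of-envelope check of the DVD-stack illustration.
# Assumed values (not from the article): 4.7 GB single-layer discs,
# 1.2 mm per disc, and an average Earth-moon distance of 384,400 km.

total_bytes = 2.5e21           # 2.5 zettabytes of storage capacity
dvd_bytes = 4.7e9              # capacity of one single-layer DVD
disc_thickness_m = 1.2e-3      # 1.2 mm per disc
moon_distance_km = 384_400     # average Earth-moon distance

discs = total_bytes / dvd_bytes
stack_km = discs * disc_thickness_m / 1_000

print(f"{discs:.2e} discs, stacked {stack_km:,.0f} km high")
print(f"about {stack_km / moon_distance_km:.1f} times the distance to the moon")
```

On those assumptions the stack comes out at roughly 640,000 km – comfortably past the moon and a good part of the way back, consistent with the illustration above.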
The business of government is no exception to this explosion in data. Extraordinary quantities of data are amassed in the course of running public services – from managing welfare payments and the National Health Service, through to issuing passports and driving licences. Regardless of the stance a government chooses on openness and transparency, an abundance of data and computing power gives the public sector new ways to organise, learn and innovate.
The opportunity for public service transformation is real. For citizens, the application of data, technology and analytics has the potential to save time and make interacting with government a much smoother experience. This runs across the whole spectrum – from pre-populating forms rather than asking for the same information twice, through to personalising welfare to help people access the support they need.
There is also significant scope to save money. The government’s annual budget is around £700 billion, so even incremental improvements in productivity can add up to big savings. We already know that fraud costs the public sector around £21 billion a year, that a further £10 billion is lost to error, and that £7-8 billion goes uncollected in debts. And we know that the tax gap – the difference between theoretical tax liabilities and what people actually pay – stands at around £35 billion. So there is clearly room to make progress.
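Totting up the quoted figures gives a sense of scale. This is a quick sketch, not an official calculation, and it takes the midpoint of the £7-8 billion debt estimate:

```python
# Rough scale of the losses quoted above, all in £ billion per year.
budget = 700
fraud, error, uncollected_debt, tax_gap = 21, 10, 7.5, 35  # 7.5 = midpoint of £7-8bn

losses = fraud + error + uncollected_debt + tax_gap
print(f"Identified annual losses: ~£{losses:.1f}bn")                      # ~£73.5bn
print(f"For comparison, a 1% productivity gain: £{budget * 0.01:.0f}bn")  # £7bn
```

Recovering even a tenth of those identified losses would be worth more than a full percentage point of productivity across the entire budget.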
Of course, data and analytics technologies alone are not a silver bullet for transforming the public sector. Underlying data and statistical issues like quality, standards and bias still need to be recognised and addressed.
Governments must have the capability to conduct data and analytics work, and to interpret and consume its outputs, intelligently. The role sometimes described as the “data scientist” spans a range of disciplines, from computer science and quantitative methods to the ability to tell stories and visualise complex relationships. And this is only partly about cutting-edge data science skills. Just as important – if not more so – is ensuring that public sector leaders and staff are literate in the scientific method and confident combining data with judgment.
Governments will also need the courage to pursue this agenda with strong ethics and integrity. The same technology that holds so much potential also makes it possible to put intense pressure on civil liberties. Both governments and businesses are exposed to tensions when attempts to extract value from data collide with individuals’ wishes not to be tracked, monitored or singled out.
Our research delivered two main recommendations for government.
First, an elite data team should be set up in the Cabinet Office, with responsibility for identifying big data opportunities and helping the public sector to unlock them – be they in central departments, local authorities or elsewhere. In its first year it should look to identify savings and benefits for central government, over and above existing plans, worth at least £1 billion. The team should take a lean, agile approach, modelled on lessons learned from the Nudge team. This is emphatically not about starting with a large, lengthy IT programme.
It should also have the job of spreading awareness and understanding of – and demand for – data and analysis across the public sector. If successful, data science should be formalised as a professional grouping inside government, alongside the existing tracks for economists, statisticians, and operational and social researchers.
Second, the government should adopt – or possibly even legislate for – a Code for Responsible Analytics, to help it adhere to the highest ethical standards in its use of data.
Important elements of such a code might include:
1. Putting outcomes before capabilities. In other words, data and analytics capabilities should always be acquired on the basis of a clear and openly communicated public policy justification, and never for their own sake. Where such capabilities are no longer needed, they should be surrendered.
2. Respecting the spirit of the right to privacy. Auxiliary data and analytics should not be used to infer personal or intimate information about citizens. Where this sort of data is needed, consent should be sought explicitly.
3. Failing in the lab, not in real life. We suggest using a sandbox environment and synthetic data to test all major big data initiatives. This would allow them to be subjected to scrutiny and peer review before any decision is made on implementation.
The last of these is particularly important. It is hard to draw a clear line, in the abstract, around how far it is acceptable for government to go with data and analytics; for the most part, we recognise decisions that overstep the mark only when we see them. Far better to discover this in a safe environment than to make a live mistake that could set the data agenda back significantly.
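To make the idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical – the field names, the distributions and the toy screening rule are illustrations, not anything proposed in the research – but it shows how an analytic can be run, scrutinised and peer-reviewed against synthetic records before any decision to deploy it on real data:

```python
# A minimal sketch of "failing in the lab": test an analytic against
# synthetic records before it ever touches real citizen data. All field
# names, distributions and the flagging rule below are hypothetical.
import random

random.seed(42)  # a fixed seed makes sandbox runs reproducible for reviewers

def synthetic_claimants(n):
    """Generate fake welfare-claim records with plausible shapes but no real identities."""
    return [
        {
            "claimant_id": f"SYN-{i:06d}",  # clearly synthetic identifier
            "weekly_payment": round(random.gauss(95, 20), 2),
            "claims_last_year": random.choices([1, 2, 3, 8], weights=[70, 20, 8, 2])[0],
        }
        for i in range(n)
    ]

def flag_for_review(record):
    """Toy screening rule under test: flag unusually frequent claimants."""
    return record["claims_last_year"] >= 8

sandbox = synthetic_claimants(10_000)
flagged = [r for r in sandbox if flag_for_review(r)]
print(f"Flagged {len(flagged)} of {len(sandbox)} synthetic records "
      f"({100 * len(flagged) / len(sandbox):.1f}%) before any live rollout")
```

The fixed random seed matters here: it means reviewers can re-run exactly the experiment that informed the decision, and challenge the rule’s false-positive rate before anyone is affected by it.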
The elements of our code are intended as a starter for debate. We urge government, parliamentarians, citizens, civil society, the business community and others to debate, challenge and improve on our proposal.
The prize at stake from making better use of data in government is large. We need to accelerate practical, radical efforts to capture it, whilst being mindful, always, that we do not sacrifice our integrity along the way.