
It’s become possible to collect masses of data about people and to use that data to make predictions and decisions. It may be possible, for example, to use data about how a person uses a touchscreen to predict whether they have certain health conditions, or to use social network data to predict how likely they are to repay a loan or hold down a job.
Those predictions can turn into real-world decisions that affect people: decisions about whether to offer insurance, a loan, a job or a training opportunity. And increasingly those decisions can be automated, with limited input from people. This raises important questions about whether we can trust these systems, and how we exercise oversight over them.
There’s massive opportunity, but also risk. We know that these systems can be flawed in ways that discriminate against already disadvantaged groups: rating people of colour as higher reoffending risks in criminal justice systems, for example, or showing different job advertisements to men and women. These systems can also be used to personalise news feeds, advertising, services and even the prices people pay, raising critical issues of fairness.
So if we’re going to use data, we have to do it right: we have to find ways to make sure it’s fair, legal and consistent with our values and with our concern for people and their rights. That is true whether we’re talking about health, transport, education or social services, or indeed any area where automated decision-making has an impact on people.