The robodebt scheme was an example of a government “beta testing” algorithms on its most vulnerable citizens, one that failed to properly account for the fundamental principles of accuracy, accountability and fairness, according to the former Australian human rights commissioner.

Ed Santow, who spent years warning of the dangers of poorly implemented technology as Australia’s human rights commissioner, said on Wednesday that Australia’s lack of strategic artificial intelligence (AI) skills was “incredibly dangerous” and could lead to more programs like the “robodebt disaster”.

Ed Santow: The robodebt disaster came from the government “beta testing” technology on its most vulnerable citizens.

The Online Compliance Intervention system, better known as robodebt, was launched by the Coalition government in 2016. It used an algorithm to average out a welfare recipient’s yearly income using data from the ATO and cross-matched this with income reported to Centrelink.

The scheme raised $1.7 billion in debts against 443,000 people and regularly incorrectly matched data, leading to inaccurate or non-existent debt notices being issued.

In awarding victims of the scheme a $112 million settlement in June, a Federal Court judge blasted the program as “unlawful” and a “shameful chapter” in Australia’s social security history.

Mr Santow was highly critical of robodebt as Australia’s Human Rights Commissioner during his five-year term, which ended in July.

Now industry professor of responsible technology at the University of Technology Sydney, Mr Santow said robodebt was an example of AI or algorithm-based technology being deployed by government without adequate ethics, testing or recourse for the people affected by it.

“[AI] is often pretty experimental technology,” Mr Santow said during a University of Technology Sydney (UTS) responsible AI webinar on Wednesday.

“Historically, not just in Australia, but a lot of countries have got a very bad record of beta testing new technology on literally the most vulnerable citizens in our community.

“And, frankly, that’s what seems to have happened with robodebt.”

Mr Santow said the automated technology should not have been trialled on vulnerable citizens without stringent safeguards, and potentially not at all.

Any deployment of AI or algorithm-based technology needs to satisfy fundamental principles of accuracy, fairness and accountability in order to minimise risks, Mr Santow explained.

But robodebt had not adequately addressed any of them, he said.

Accuracy is “crucially important” when governments make decisions using technology, Mr Santow said, but the robodebt scheme was found to have very high rates of error, including raising debts against individuals who owed the government nothing.

The scheme also lacked simple redress mechanisms for when errors were made, with victims forced to “untie the Gordian knot” with lawyers to get any sort of accountability, the former human rights leader said.

Mr Santow also questioned the overarching fairness of a scheme that raised relatively small debts from many years prior.

“Sometimes when you’re cutting money back from someone that you may have overpaid $100 five, six, seven years ago, maybe it’s actually not really fair to claim that money back.

“So, you need to have an overarching kind of look at the system that you’re creating and making sure that it really works fairly for people.”

Mr Santow said Australia must learn the lessons from robodebt, but this will require more people with “strategic” AI skills to avoid similar problems.

The CSIRO has forecast that on current trends Australia will be 70,000 graduates short of what is required to meet demand for technical AI skills like data science by 2030. Mr Santow said the technical skills challenge is well understood and several moves are being made to address it.

But another “incredibly dangerous” skills gap of strategic AI expertise is widening, he said.

Mr Santow pointed to research showing organisations are under pressure to deploy AI technologies but most have little idea how to do so responsibly. That pressure would lead to irresponsible and dangerous uses of the technology, as it did with robodebt, he said.

The former human rights commissioner joined UTS this month to lead a responsible technology initiative. He will lead a push by the university to educate companies and government agencies from leadership down on the risks and opportunities of AI.

“We really want to be at the forefront of building Australia’s AI capability so that companies and government agencies can use artificial intelligence smartly, responsibly, and in accordance with our liberal democratic values. And that means respecting people’s basic human rights.”

Earlier this week, Perth lawyer Lorraine Finlay was appointed as Australia’s next Human Rights Commissioner by the Coalition government. Ms Finlay has rarely broached technology issues in relation to rights and has been critical of the Australian Human Rights Commission in the past.

Crisis support is available from Lifeline on 13 11 14.

Do you know more? Contact James Riley via email.