February 12, 2020
The public sector must uphold high standards of conduct when adopting AI, a report from the Committee on Standards in Public Life has concluded. The committee does not believe a new AI regulator or major legal change is necessary, but it does have concerns about a lack of openness, a lack of accountability, and data bias.
‘Any change in how public services are delivered must not undermine the values that underpin public service, and it’s clear that the public need reassurance about the way AI will be used’, Lord Jonathan Evans, the Committee’s Chair, wrote in a blog about the new report. ‘On openness, government is failing. Public sector organisations are not sufficiently transparent about their use of AI and it is extremely difficult to find out where and how new technology is being used. We add our voice to that of the Law Society and the Bureau of Investigative Journalism in calling for better disclosure of algorithmic systems.’
Explaining AI decisions will be key to accountability, Lord Evans said. ‘Explainable AI is a realistic and attainable goal for the public sector – so long as public sector organisations and private companies prioritise public standards when they are designing and building AI systems.’
On data bias, ‘our committee found cause for serious concern’, he warned. ‘Public sector organisations must be aware of how their software solutions affect minority communities – and they should act to minimise any discriminatory impact.’
Policy and regulation
The UK’s regulatory and governance framework for AI in the public sector ‘remains a work in progress and deficiencies are notable’, the committee found. It welcomed recent public sector AI guidance as a step in the right direction and said the GDPR and Equality Act 2010 provide strong legal safeguards. However, it identified ‘an urgent need for practical guidance and enforceable regulation’ to address lack of transparency and data bias.
The committee also endorsed the government’s plan to turn the Centre for Data Ethics and Innovation, which is part of the Department for Digital, Culture, Media & Sport, into an independent statutory body to advise government and existing regulators, rather than creating a new AI regulator.
UK citizens ‘baffled’
Last year, the Bureau of Investigative Journalism issued a report looking at the government’s investment in AI and where UK taxpayers’ money is going. It highlighted growing concerns around the data being generated about citizens and the ability of legal frameworks to keep pace with technological advances.
‘People are convinced that the growth of technology in the public sector has hugely important ramifications, but are baffled as to what exactly is going on and who is doing it’, the authors wrote. ‘Transparency – and therefore accountability – over the way in which public money is spent remains a very grey area in the UK.’
Asheesh Mehra, the CEO and co-founder of AntWorks, an AI company, said another area of concern is AI’s impact on the environment. There are benefits, because ‘AI can make better climate predictions, show the effects of extreme weather and identify the source of carbon emissions. The problem is that AI engines require giant datasets, which means huge amounts of computing infrastructure that consume enormous amounts of power. This can have significant carbon footprints.’
The UK government has a significant responsibility to take a lead on creating and implementing rules on AI use, he said. ‘Regulations should indicate that applying AI is appropriate for particular purposes in specific industries, while other laws or rules should make clear what applications of AI are not allowed.’