New centre will assess potential for bias in algorithmic decision-making

The potential for bias in the use of algorithms in crime and justice, financial services, recruitment and local government will be investigated by the UK government’s new Centre for Data Ethics and Innovation (CDEI). The CDEI will explore the potential for bias in existing systems and ways to support fairer decision-making. This may include increasing opportunities for those in the job market within existing recruitment and financial services systems. It will also explore opportunities to boost innovation in the digital economy.

According to the government, algorithms have huge potential for preventing crime, protecting the public and improving the way services are delivered. But decisions made in these areas are likely to have a significant impact on people’s lives, and public trust is essential.

Professionals in these fields are increasingly using algorithms built from data to help them make decisions. But there is a risk that any human bias in that data will be reflected in recommendations made by the algorithm. The CDEI wants to ensure those using such technology can understand the potential for bias and have measures in place to address it. It also aims to help guarantee fairer decisions and, where possible, improve processes.
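To make that risk concrete, the short sketch below uses entirely synthetic data and scikit-learn (all names and figures are illustrative, not drawn from any real system): past hiring decisions penalise one group, group membership is never given to the model as a feature, yet a proxy feature correlated with group is enough for the model to reproduce the disparity in its recommendations.

```python
# Minimal sketch (hypothetical data) of how bias in historical decisions
# can be reproduced by a model trained on them.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

group = rng.integers(0, 2, n)             # two demographic groups, 0 and 1
skill = rng.normal(0, 1, n)               # genuinely job-relevant signal
postcode = group + rng.normal(0, 0.5, n)  # proxy feature correlated with group

# Historical decisions: same skill threshold, but group 1 was penalised.
hired = (skill - 0.8 * group + rng.normal(0, 0.3, n)) > 0

X = np.column_stack([skill, postcode])    # group itself is NOT a feature
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: historical hire rate {hired[group == g].mean():.2f}, "
          f"model shortlist rate {pred[group == g].mean():.2f}")
```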


Harnessing the benefits of AI

The CDEI was established to support the government’s wider Industrial Strategy and to make sure data-driven technologies and artificial intelligence are used for the benefit of society. It will partner with the Race Disparity Unit to explore the potential for bias based on ethnicity in decisions made in the crime and justice system.

In recruitment, computer algorithms can be used to screen CVs and shortlist candidates. This could potentially limit the impact of unconscious bias, where people discriminate against candidates because of their background. But there have also been reports of such technology inadvertently exacerbating gender bias.

The CDEI has set out its priorities in its first Work Programme and strategy. These include plans to investigate how data is used to shape online experiences through personalisation and micro-targeting – for example, where you search for a product and adverts for similar products later appear in your browser.

Commenting on the launch of the CDEI, Fernando Lucini, head of AI at Accenture UK, said: “It’s important that government is looking to regulate around responsible AI, but businesses shouldn’t wait to make sure their AI is fair, ethical and free of bias. AI has tremendous opportunity to do good – but unintended consequences of deploying it could be biased outcomes. We saw Amazon, for example, have to scrap its AI-based recruiting tool when it appeared to show bias against women.

“An AI’s ethics are only as good as the rules they’re built on. Organisations should anchor their AI to their core values and mission, introduce a clear ethical framework, and put training in place to equip developers to design out bias. Then AI leaders should review the outputs of their AI weekly and have processes in place to override questionable decisions. Beyond that, being open with consumers about where AIs are used and how they make decisions is the best way to build trust.”
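As an illustration of what such a regular output review could look like, the sketch below compares a model’s positive-decision rates across groups and flags any group falling well behind the best-served one. The field names are hypothetical, and the 80% threshold is borrowed from the common “four-fifths” rule of thumb rather than from any CDEI guidance.

```python
# Minimal sketch of a periodic review of model decisions: compute each
# group's positive-decision rate and flag groups below 80% of the best rate.
from collections import defaultdict

def review_decisions(records, group_key="group", decision_key="shortlisted",
                     threshold=0.8):
    """Return per-group positive rates and a list of flagged groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        positives[r[group_key]] += bool(r[decision_key])

    rates = {g: positives[g] / totals[g] for g in totals}
    best = max(rates.values())
    flagged = [g for g, rate in rates.items() if rate < threshold * best]
    return rates, flagged

# Example weekly batch of (hypothetical) model decisions
batch = [{"group": "A", "shortlisted": True}] * 40 + \
        [{"group": "A", "shortlisted": False}] * 60 + \
        [{"group": "B", "shortlisted": True}] * 20 + \
        [{"group": "B", "shortlisted": False}] * 80

rates, flagged = review_decisions(batch)
print(rates)    # {'A': 0.4, 'B': 0.2}
print(flagged)  # ['B'] -> 0.2 is below 0.8 * 0.4, so this batch needs review
```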