Express & Star

Experts ‘deeply concerned’ as Government agency drops focus on bias in AI

The AI Safety Institute is being renamed the AI Security Institute to reflect a greater focus on crime and national security issues.

By contributor Christopher McKeon, PA Political Correspondent
The Government has announced the newly rebranded AI Security Institute will no longer focus on ‘bias or freedom of speech’ concerns, concentrating instead on crime and national security (Alamy/PA)

Technology experts have expressed concern that the Government is “pivoting away from ‘safety’ towards ‘national security’” after it announced a rebranding of the AI Safety Institute.

Peter Kyle, the Technology Secretary, rechristened the agency on Friday as the AI Security Institute (AISI), saying it would refocus its work on crime and national security issues.

But while Mr Kyle insisted the AISI’s work “won’t change”, his department revealed it would no longer focus on “bias or freedom of speech”, sparking concern from experts in the field.

Michael Birtwistle, associate director at the Ada Lovelace Institute, said he was “deeply concerned that any attention to bias in AI applications has been explicitly cut out of the new AISI’s scope”.

He said: “A more pared back approach from the Government risks leaving a whole range of harms to people and society unaddressed – risks that it has previously committed to tackling through the work of the AI Safety Institute.

“It’s unclear if there’s still a plan to meaningfully address them, if not in AISI.”

Rishi Sunak launched the AI Safety Institute at the end of 2023, but less than two years later it is being renamed (Justin Tallis/PA)

Pointing to a series of scandals involving bias in AI in Australia, the Netherlands and the UK, Mr Birtwistle said there was a “real risk that inaction on risks like bias will lead to public opinion turning against AI”.

As well as the AISI’s new name, Mr Kyle announced the creation of a new “criminal misuse” team within the institute to tackle risks such as AI being used to create chemical weapons, carry out cyber attacks and enable crimes such as fraud and child sexual abuse.

Crime and security concerns already form part of the institute’s remit, but it currently also covers wider societal impacts of artificial intelligence, the risk of AI becoming autonomous and the effectiveness of safety measures for AI systems.

When the institute was established in 2023, then-prime minister Rishi Sunak said it would “advance the world’s knowledge of AI safety”, including exploring “all the risks from social harms like bias and misinformation, through to the most extreme risks of all”.

Mr Kyle said the AISI’s “renewed focus” on security would “ensure our citizens – and those of our allies – are protected from those who would look to use AI against our institutions, democratic values and way of life”.

But Andrew Dudfield, head of AI at fact-checking organisation Full Fact, said the move was “another disappointing downgrade of ethical considerations in AI development that undermines the UK’s ability to lead the global conversation”.

Sir Keir Starmer has pledged to put AI at the heart of his Government, but the UK declined to join other nations in signing a major international agreement on the technology in Paris earlier this week (Stefan Rousseau/PA)

Describing security and transparency as “mutually reinforcing pillars essential to building public confidence in AI”, Mr Dudfield added: “If the Government pivots away from the issues of what data is used to train AI models, it risks outsourcing those critical decisions to the most powerful internet platforms rather than exploring them in the democratic light of day.”

Friday’s announcement comes after the Government began the year pledging to make the UK a world leader in AI and to put the technology at the heart of Whitehall.

But it also comes in the same week that the UK joined the US in refusing to sign an international agreement on AI at a summit in Paris.

The Government said it had declined to sign the communique issued at the end of the French-hosted AI Action Summit as it had not provided enough “practical clarity” on “global governance” of the technology or addressed “harder questions” about national safety.
