Artificial Intelligence Biases Protected Characteristics 2023-12-13

TAGS
Response quality

Questions & Answers

Q1 Partial Answer
Context
Discussions about the impact of AI on individuals with protected characteristics are ongoing, and there is concern that AI systems may discriminate unfairly.
What recent discussions has she had with Cabinet colleagues on the potential for biases in artificial intelligence technologies in relation to people with protected characteristics?
We are having cross-governmental discussions about AI, and we are very clear that AI systems should not undermine people's rights or discriminate unfairly. This was a key topic of discussion at the AI safety summit, and it remains a priority for the Government. Fairness is a core principle of our AI regulatory framework, and UK regulators are already taking action to address AI-related bias and discrimination.
Assessment & feedback
The answer does not provide specific details about recent government discussions on biases in AI affecting individuals with protected characteristics.
Response accuracy
Q2 Partial Answer
Context
The question arises from findings by the Institute for the Future of Work indicating significant risks to equality posed by AI, particularly in recruitment processes where compliance with UK Equality Law is often inadequate.
Is the Minister aware that the Institute for the Future of Work found that the use of artificial intelligence in recruitment presents risks to equality, and that audits of AI tools used in recruitment are often inadequate? What steps are being taken across Government to ensure appropriate assessments of equalities impacts when using AI in workplaces?
That is exactly why we had the AI safety summit, at which more than 28 countries plus the EU signed up to the Bletchley declaration. In March, we published the AI regulation White Paper, which set out our first steps towards establishing a regulatory framework for AI. I repeat that AI systems should not undermine people's rights or discriminate unfairly, and that is one of the core principles set out in the White Paper.
Assessment & feedback
The answer does not address the specific findings from the Institute for the Future of Work or provide concrete steps taken to assess equalities impacts when using AI in workplaces.
Response accuracy
Q3 Partial Answer
Context
Concerns exist about the potential liberalisation of AI use in decision-making processes under the UK Government's Data Protection and Digital Information Bill, which may reduce appeal rights for individuals facing decisions made by automated systems.
The risk of perpetuating inequality through sole reliance on automated decision-making is well recognised, both in recruitment and for disabled people accessing employment, as well as in other contexts such as immigration and welfare benefits. However, the Data Protection and Digital Information Bill aims to liberalise AI use while reducing appeal rights. Does the Minister understand this risk? What specific plan does she have to mitigate risks such as encoded bias?
I do not recognise the hon. Member's assessment, but let me say this: context matters. The risks of bias will vary depending on the specific way in which AI is used. That is why we are letting the regulators describe and illustrate what fairness means within their sectors, because they will be able to apply greater context to their discussions. The risk of discrimination should be assessed in context, and guidance should be issued that is specific to the sector. That is why we are preparing and publishing guidance to support the regulators. We will then encourage and support them to develop joint guidance. We will be working with the Equality and Human Rights Commission, the Information Commissioner's Office and the Employment Agency Standards Inspectorate.
Assessment & feedback
The answer does not provide a specific plan or concrete measures to mitigate risks of encoded bias in AI decision-making.
Response accuracy