
How is Artificial Intelligence being regulated?

The Information and Records Management Society (IRMS) says there are four essential elements of any data-driven business: Information, People, Technology and Regulation. And it’s that final element, regulation, which is proving particularly tricky when it comes to the management and oversight of Artificial Intelligence.

The BBC reports that the government has set out plans to regulate artificial intelligence with new guidelines on “responsible use”. Describing it as one of the “technologies of tomorrow”, the government said AI contributed £3.7bn ($5.6bn) to the UK economy last year.

Critics fear the rapid growth of AI could threaten jobs or be used for malicious purposes.

The term AI covers computer systems able to do tasks that would normally need human intelligence.

This includes chatbots able to understand questions and respond with human-like answers, and systems capable of recognising objects in pictures.

A new white paper from the Department for Science, Innovation and Technology proposes rules for general-purpose AI – systems that can be applied to many different tasks.

These include, for example, the technology that underpins the chatbot ChatGPT.

As AI continues to develop rapidly, questions have been raised about the future risks it could pose to people’s privacy, their human rights or their safety. There is concern that AI can display biases against particular groups if trained on large datasets scraped from the internet, which can include racist, sexist and other undesirable material. AI could also be used to create and spread misinformation. As a result, many experts say AI needs regulation.

However, AI advocates say the technology is already delivering real social and economic benefits for people.

And the government fears organisations may be held back from using AI to its full potential because a patchwork of legal regimes could cause confusion for businesses trying to comply with rules.

Instead of giving responsibility for AI governance to a new single regulator, the government wants existing regulators – such as the Health and Safety Executive, Equality and Human Rights Commission and Competition and Markets Authority – to come up with their own approaches that suit the way AI is actually being used in their sectors. These regulators will be using existing laws rather than being given new powers.

The white paper outlines five principles that the regulators should consider to enable the safe and innovative use of AI in the industries they monitor:

  • Safety, security and robustness: applications of AI should function in a secure, safe and robust way where risks are carefully managed
  • Transparency and “explainability”: organisations developing and deploying AI should be able to communicate when and how it is used, and explain a system’s decision-making process at a level of detail that matches the risks posed by the use of AI
  • Fairness: AI should be used in a way which complies with the UK’s existing laws, for example on equalities or data protection, and must not discriminate against individuals or create unfair commercial outcomes
  • Accountability and governance: measures are needed to ensure there is appropriate oversight of the way AI is being used and clear accountability for the outcomes
  • Contestability and redress: people need to have clear routes to dispute harmful outcomes or decisions generated by AI

Over the next year, regulators will issue practical guidance to organisations, setting out how to implement these principles in their sectors. However, there are concerns that the UK’s regulators could be burdened with “an increasingly large and diverse” range of complaints when “rapidly developing and challenging” AI is added to their workloads.

China has already taken the lead in moving AI regulation past the proposal stage, with rules that require companies to notify users when an AI algorithm is playing a role.

In the EU, the European Commission has published proposals for a regulation, the Artificial Intelligence Act, which would have a much broader scope than China’s enacted rules.

They include “grading” AI products according to how potentially harmful they might be and staggering regulation accordingly. So, for example, an email spam filter would be more lightly regulated than a tool designed to diagnose a medical condition – and some uses of AI, such as social scoring by governments, would be prohibited altogether.

“AI has been around for decades but has reached new capacities fuelled by computing power,” Thierry Breton, the EU’s Commissioner for Internal Market, said in a statement.

The AI Act aims to “strengthen Europe’s position as a global hub of excellence in AI from the lab to the market, ensure that AI in Europe respects our values and rules, and harness the potential of AI for industrial use,” Mr Breton added.

Meanwhile, in the US, the proposed Algorithmic Accountability Act of 2022 would require companies to assess the impacts of the AI they use, but the nation’s AI framework is so far voluntary.

When it comes to hard-copy paper documents, the records storage and management sector is well regulated. However, the regulations governing records management are complex and often misunderstood, and the risks associated with non-compliance are far-reaching. Ultimately, the Information Commissioner’s Office can fine up to £500,000 for data breaches.

Filofile understands the challenges faced by businesses today and has developed a cost-effective method that addresses the complete life cycle of a document. One of the most serious problems, for example, is destroying documents before the end of their legally required retention period – or storing them beyond it. Part of Filofile’s full service includes end-of-life procedures for documents: once authorised by the client, data is carefully disposed of when the agreed time is reached. With a combination of technology and real human expertise, Filofile clients can be assured their documents are in safe hands. Give us a call on 0845 602 7006 or message us here to see how we can help you with your records management and storage – you will always be talking to a real person and not a chatbot!
