The AI Sliding Scale – A Tool or a Threat?

Jerome Lussan

AI regulation feels unclear, to say the least. We seek to delve into what has been done so far, and what is intended, to prepare for the forthcoming compliance duties that will affect many of us. AI has seemingly endless potential (both negative and positive), with no current way of fully understanding it. As mentioned in Leo’s June newsletter issue, ‘GDPR and AI crossroads’1, the European Parliament’s ‘draft law on AI aim[s] at building a ‘Responsible AI’’. The question presents itself: what exactly is a ‘Responsible AI’, and how is it expected to appear in the world of compliance?

We have analysed the EU, US and UK approaches towards building this “Responsible AI”. Are there guidelines, standards, and best practices ensuring that AI technologies are designed and used in a manner that is fair, transparent, and accountable?

The EU seems to view AI almost exclusively as a danger until it is sufficiently regulated, whilst the US prefers to view it as an opportunity. The UK, meanwhile, acknowledges the dangers but has taken no active steps to regulate, leaving it much further behind. We will now explore these varied approaches.

EU vs US in approach to regulating AI

So far, the EU has been highly vocal about regulating AI, but this stance has not been adopted by the US.

The US is ‘the top country in the number of AI talents’2, which would lead us to expect that it would also be leading the discussion on AI regulation. However, as stated by Bloomberg’s Courtney Rozen, ‘the US government doesn’t have a great track record of keeping up with emerging technology’.

In the US, AI regulation is described as being in its ‘early days’3, which arguably doesn’t quite square with the first US AI programme running as early as 1952.4 It is, however, only recently that AI has become mainstream through tools such as ChatGPT, which may have spurred the US’ most recent development in AI regulation: the AI Disclosure Act, introduced in June 2023, which would require all AI-generated material to include “DISCLAIMER: this output has been generated by artificial intelligence”.5 Before that, the US’ only notable development was the publication of a non-binding blueprint for an AI Bill of Rights.

Contrast this with the EU’s approach to regulating AI, which feels detailed and hyper-conscious, with specific attention to GDPR. Although it has yet to be officially passed, the AI Act has been discussed and considered in depth and is expected to be brought into law within the next year.

The AI Act would introduce a tiered system of regulatory obligations, including the following (a brief illustrative sketch follows the list):6

  • AI applications must clearly disclose themselves to affected persons
  • Systems deemed as ‘unacceptable risk’ would be banned completely
  • ‘High risk’ systems (between ‘low risk’ and ‘unacceptable risk’) would have to meet clear and demanding standards of transparency and monitoring
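
As a purely illustrative aid, the sketch below shows one way a compliance tool might encode this tiering internally. It assumes Python; the tier names follow the draft Act’s categories as summarised above, but the class, function and obligation strings are our own simplifications rather than text from the Act.

```python
from enum import Enum


class RiskTier(Enum):
    """Risk tiers broadly following the draft EU AI Act's categories (simplified)."""
    LOW = "low risk"
    HIGH = "high risk"
    UNACCEPTABLE = "unacceptable risk"


# Hypothetical mapping from tier to the headline obligations summarised above.
OBLIGATIONS = {
    RiskTier.LOW: [
        "disclose to affected persons that they are interacting with AI",
    ],
    RiskTier.HIGH: [
        "disclose to affected persons that they are interacting with AI",
        "meet clear and demanding standards of transparency and monitoring",
    ],
    RiskTier.UNACCEPTABLE: [
        "deployment banned completely",
    ],
}


def obligations_for(tier: RiskTier) -> list[str]:
    """Return the headline obligations attached to a given risk tier (illustrative only)."""
    return OBLIGATIONS[tier]


if __name__ == "__main__":
    for tier in RiskTier:
        for duty in obligations_for(tier):
            print(f"{tier.value}: {duty}")
```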

The AI Act, in addition to the EU’s existing Digital Services Act and Digital Markets Act, creates a clear focus on safety and GDPR requirements in the EU’s approach to AI regulation.

As a reminder to our readers, note that the Digital Services Act states that “…it will give people more control over what they see online: users will have better information over why specific content is recommended to them and will be able to choose an option that does not include profiling.”7 The Digital Markets Act’s purpose “…is to ensure a level playing field for all digital companies, regardless of their size. The regulation will lay down clear rules for big platforms – a list of “dos” and “don’ts” – which aim to stop them from imposing unfair conditions on businesses and consumers.”8

Observing these vastly different approaches to regulating AI, we can conclude that the US’ approach will no doubt open up many opportunities. It will also, as is often the case in ‘liberal’ regimes, invite many abuses, reminiscent of the financial scene of a decade ago that enabled scandals such as Madoff’s. The EU should offer safer ground, and its approach feels like the right one; however, in this age of borderless internet services, EU regulation is likely to lead bad actors, and others, to establish themselves in the US whilst still being able to reach the EU customer base.

UK: A middleman approach to regulating AI

If we consider the US and EU as opposite approaches, it may be accurate to describe the UK as a middleman. The FCA’s Chief Data, Information and Intelligence Officer, Jessica Rusu, demonstrated a focused perspective at the 2023 Financial Global AI Regulation Summit, describing the FCA’s discussions on the adoption of AI into financial markets as intending to ‘ensur[e] we win the financial coin toss’.9 This further supports the idea that the integration of AI into the financial sector still has a very unpredictable future, but that we have the capacity to direct it towards a positive route. In the same speech, Rusu acknowledged the UK’s particular need to use AI responsibly, given how many UK firms operate internationally. As a result, UK compliance must be prepared for the potential lack of AI regulation in jurisdictions such as the US.

Dan Milmo, Global Technology Editor of The Guardian, has juxtaposed the phrase ‘MPs have said […] the UK should introduce new legislation to control [AI] or risk falling behind the EU and US’ with ‘if the government does not take action now, legislation may not be enacted until late 2025’.10 This contrast neatly summarises the UK’s approach: acknowledgement of the potential threat from AI, yet no urgent move to take regulatory action.

Whilst the UK’s conceptual perception of AI (as both tool and threat) is closer to the EU’s, the UK, as we have seen, is even further behind the US in the development of AI regulation.

Conclusion

To conclude, in the world of compliance, “Responsible AI” is expected to play a significant role in ensuring that AI technologies and applications adhere to relevant laws and regulations. This includes data protection laws (such as GDPR in Europe), anti-discrimination laws, and industry-specific regulations. Compliance with ‘Responsible AI’ principles is becoming increasingly important for organisations, as non-compliance can result in legal consequences, reputational damage, and financial penalties.

As previously mentioned, the UK and FCA’s approach to regulating AI, in terms of recognising its potential danger, sits very much between the US and the EU. However, AI regulation discussions in the UK are not nearly as advanced as in either the US or the EU. The UK clearly has an awareness of the threat that the US largely chooses to ignore, but we are drastically behind in putting any actual action plan in place.


Leo and AI

Regulatory technology like Leo is still new to the market (in terms of exposure), which makes its interaction with AI technology an interesting one to navigate. In almost every circumstance, Artificial Intelligence technology has been portrayed as an unpredictable sliding scale; on one end it has undeniable potential whilst on the other it is unquestionably threatening.

The aim of Leo is to create an efficient platform that makes compliance easier and simpler, which is why we have taken a significant step forward by introducing a conversational artificial intelligence (AI) feature into the software. This innovative addition enhances the user experience by providing information and responses to any message you type into it. In keeping with the only regulation enforced in the UK thus far, Leo clearly informs users that information received in this way has been AI-generated; transparency is a key element of regulation that Leo continues to execute to the highest degree.
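
For illustration only, a disclosure step of this kind can be as simple as labelling every AI-generated reply before it reaches the user. The snippet below is a minimal sketch, not Leo’s actual implementation; the function name and the wording of the notice are assumptions.

```python
# Illustrative notice text; the wording is an assumption, not Leo's actual disclaimer.
AI_NOTICE = "DISCLAIMER: this response has been generated by artificial intelligence."


def tag_ai_response(generated_text: str) -> str:
    """Prepend a clear AI-generation notice to a model-generated reply (illustrative only)."""
    return f"{AI_NOTICE}\n\n{generated_text}"


# Example: whatever the underlying model returns is labelled before it is displayed.
print(tag_ai_response("Your next GDPR records-of-processing review is due in 30 days."))
```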

Additionally, Leo itself represents a resource the UK needs more than it appears to realise; regulatory technology is likely to be a large part of the solution to the AI problem, acting as a filter that lets positive uses of AI through whilst preventing threats from going unnoticed or ignored.

Leo and GDPR

It will likely be a considerable amount of time before we witness any enforcement measures being implemented to regulate AI; in the meantime, AI companies and AI usage should be anchored in data privacy laws, including the GDPR.

Leo’s GDPR software offers a suite of privacy tools allowing the creation and maintenance of robust privacy frameworks, including data impact assessment modules and all the registers required under the GDPR. With an integrated reporting system and calendars for important deadlines, it is the ideal software for data protection officers and compliance personnel.

To learn more about our GDPR solution, click below.

See Leo for GDPR


1 https://leo.tech/gdpr-and-ai-crossroads-how-to-balance-data-privacy-and-ai-governance/

2 https://intersog.com/blog/ai-dominant-players-and-aspiring-challengers/#:~:text=AI%20Development%20in%20the%20USA%20Is%20at%20the%20Highest%20Level,-The%20group%20of&text=The%20score%20of%20other%20countries,Korea%2C%20Australia%2C%20and%20Switzerland.

3 https://www.nytimes.com/2023/07/21/technology/ai-united-states-regulation.html

4 https://www.britannica.com/technology/artificial-intelligence/Alan-Turing-and-the-beginning-of-AI

5 https://ritchietorres.house.gov/posts/u-s-rep-ritchie-torres-introduces-federal-legislation-requiring-mandatory-disclaimer-for-material-generated-by-artificial-intelligence

6 https://www.brookings.edu/articles/the-eu-and-us-diverge-on-ai-regulation-a-transatlantic-comparison-and-steps-to-alignment/#anchor4

7 https://digital-strategy.ec.europa.eu/en/policies/digital-services-act-package

8 https://www.europarl.europa.eu/news/en/headlines/society/20211209STO19124/eu-digital-markets-act-and-digital-services-act-explained

9 https://www.fca.org.uk/news/speeches/ai-flipping-coin-financial-services#:~:text=The%20role%20of%20regulation%20and,of%20AI%20in%20financial%20services.

10 https://www.theguardian.com/technology/2023/aug/31/britain-must-become-a-leader-in-ai-regulation-say-mps
