What’s happening?
Everyone is talking about how to regulate AI. This May, by coincidence the month the GDPR turned five1, the members of the US Senate Judiciary Subcommittee on Privacy, Technology, and the Law convened a hearing to discuss “Oversight of A.I.: Rules for Artificial Intelligence.” On the 14th of June, the European Parliament passed a draft law on AI aimed at building ‘Responsible AI’ – interestingly, during the same week that Google was charged with violating European Union antitrust law by abusing its dominant position in the advertising technology sector (it seems that the old slogan ‘Don’t be evil’ is not working out for them). The vote on the draft law only establishes the European Parliament’s position on regulating AI; specific details of these efforts are not yet clear and will evolve over time.
Despite numerous discussions during the hearing, and various attempts to regulate AI, there are still no clear answers or indications on how to address this relatively new phenomenon. As usual with such sensitive subjects, there are strong points of view. During the hearing, Sam Altman, the CEO of OpenAI, informed the senators that “OpenAI believes that regulation of A.I. is essential”2 and argued for creating a government agency focused on AI to license the technology. On the other hand, on the 12th of June Google, in a comment submitted to the National Telecommunications and Information Administration, challenged Sam Altman’s position, saying that it preferred a “multi-layered, multi-stakeholder approach to AI governance”: “At the national level, we support a hub-and-spoke approach — with a central agency like the National Institute of Standards and Technology (NIST) informing sectoral regulators overseeing AI implementation — rather than a ‘Department of AI.’” Microsoft appears to side with OpenAI’s point of view.
Further down the line, the EU has settled on a few key restrictions3:
1. Ban on emotion-recognition AI.
2. Ban on real-time biometrics and predictive policing in public spaces.
3. Ban on social scoring.
4. New restrictions for generative AI.
5. New restrictions on recommendation algorithms on social media.
So what, if anything, regulates AI today, and how can any regulation be achieved?
AI and Data Protection
Data privacy abuse is among the key concerns of utilising AI. The potential for data breaches as companies exercise their legal rights to utilise personal information is real, and with the acceleration of AI comes an acceleration of new issues. In short, the extensive collection of data by AI systems increases the risk of unauthorised access, and OpenAI’s ChatGPT is at the top of the list. The GDPR, with which everyone should already comply, requires the highest levels of accountability from organisations. Ensuring that data processing meets all relevant requirements should therefore already be part of any robust privacy framework, and record keeping demonstrating that a company implemented all necessary data protection measures is equally important. It seems that the GDPR will be particularly relevant to the body of AI laws that we should come to expect.
We see this in the way that some have taken objection to the unleashing of ChatGPT. In March, Italy temporarily banned ChatGPT from collecting the personal data of data subjects in Italy; the ban has since been lifted after OpenAI addressed the regulatory concerns. According to the regulator, the Garante per la Protezione dei Dati Personali, OpenAI did not have the legal right to utilise people’s personal information in ChatGPT. Italy’s Garante raised the following GDPR-related concerns about ChatGPT.
Breach of the GDPR principle of Lawfulness, fairness and transparency: there was no legal basis for collecting people’s personal information within the massive amount of data used to train ChatGPT, and a further lack of transparency and fairness as to how this data is processed for the end user. In more detail:
- Breach of the GDPR principle of Accuracy: ChatGPT can provide inaccurate information about individuals without any possibility of rectification, and moreover
- OpenAI lacks age controls to prevent individuals under the age of 13 from using the text generation system.
Further breaches of the GDPR principles4 could also include:
- principle of Purpose Limitation: the objective of this principle cannot be met, as no legal basis for processing has been identified and people have been informed neither that their data was processed nor for what purpose,
- principle of Data Minimisation: there is no mechanism that would allow OpenAI to categorise and minimise the data, and
- principle of Confidentiality and Integrity: whereby all data had been released to the world on an indiscriminate basis.
The shortcomings listed above are damning. By relying heavily on vast quantities of data, AI risks encroaching upon individuals’ privacy. The absence of adequate safeguards and regulatory assurances has led to rising privacy apprehension within organisations, and it is very likely that AI companies, and the companies that use AI, will breach the GDPR. The main question for immediate consideration is therefore: how is the AI industry addressing data privacy, if at all?
Is the AI industry addressing data privacy?
From our research5,6, we note that the AI industry is attempting to tackle data privacy through various approaches. Some key aspects include:
- Privacy-preserving techniques: AI researchers and developers are actively working on techniques that preserve privacy, allowing data to be used for analysis and model training without exposing sensitive information (see the first sketch after this list).
- Data anonymization: Efforts are being made to anonymize data used in AI systems by removing personally identifiable information (PII) or employing techniques such as generalization and k-anonymity, a property of certain anonymized datasets often described as the power of “hiding in the crowd”: individuals’ data is pooled into a larger group, so that any record in the group could correspond to any of its members, masking the identity of the individual in question7. This helps protect individuals’ identities while still enabling analysis and model development (see the second sketch after this list).
- Consent and control: There is a growing emphasis on obtaining informed consent from individuals for the use of their data in AI systems. Congressional discussions focus on ensuring that individuals have control over how their data is collected, stored, and used, including the right to opt-out or request the deletion of their data.
- Regulatory frameworks: Policymakers are actively discussing and developing regulatory frameworks to safeguard data privacy in the context of AI. These frameworks aim to establish guidelines and rules for organizations concerning AI’s responsible and ethical use, including data protection measures.
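To make the first bullet more concrete, here is a minimal sketch of one well-known privacy-preserving technique, differential privacy, using the Laplace mechanism on a simple counting query. The dataset, the query and the epsilon value are illustrative assumptions of ours; the approaches referenced above are not limited to, and do not necessarily use, this exact method.

```python
# Illustrative sketch only: a differentially private counting query.
# The data, epsilon and query below are made up for demonstration purposes.
import numpy as np

def private_count(values, predicate, epsilon: float = 1.0) -> float:
    """Answer a counting query with Laplace noise calibrated to the query's sensitivity.

    A counting query has sensitivity 1 (adding or removing one person's record
    changes the true count by at most 1), so the noise scale is 1 / epsilon.
    """
    true_count = sum(1 for v in values if predicate(v))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical data: ages of individuals in a training set.
ages = [34, 52, 29, 41, 38, 47, 55, 23]

# How many individuals are 40 or older? The analyst only ever sees a noisy
# answer, which limits what can be inferred about any single individual.
print(private_count(ages, lambda a: a >= 40, epsilon=0.5))
```

A smaller epsilon means more noise and stronger privacy; the point is that aggregate analysis can proceed without exposing any one individual’s exact contribution.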
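And here is a minimal sketch of the k-anonymity idea from the second bullet: the size of the smallest group of records sharing the same quasi-identifiers determines k. The column names, the sample records and the generalisation step are hypothetical, chosen purely to illustrate the “hiding in the crowd” intuition.

```python
# Illustrative sketch only: measuring k-anonymity over hypothetical records.
from collections import Counter

# Quasi-identifiers: attributes that could re-identify someone when combined.
QUASI_IDENTIFIERS = ("age_band", "postcode_prefix")

records = [
    {"age_band": "30-39", "postcode_prefix": "SW1", "diagnosis": "A"},
    {"age_band": "30-39", "postcode_prefix": "SW1", "diagnosis": "B"},
    {"age_band": "30-39", "postcode_prefix": "SW1", "diagnosis": "A"},
    {"age_band": "40-49", "postcode_prefix": "NW3", "diagnosis": "C"},
]

def k_anonymity(rows, quasi_identifiers) -> int:
    """Return the size of the smallest group sharing identical quasi-identifier values."""
    groups = Counter(tuple(row[q] for q in quasi_identifiers) for row in rows)
    return min(groups.values())

print(f"The dataset is {k_anonymity(records, QUASI_IDENTIFIERS)}-anonymous")
# Here the result is 1: the last record is unique on its quasi-identifiers, so it
# does not "hide in the crowd". Generalising further (wider age bands, shorter
# postcode prefixes) or suppressing outliers raises k at the cost of detail.
```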
Conclusion
The Americans innovate, the Europeans regulate?
The ongoing debate revolves around AI governance, with the major tech companies (Google, Microsoft, OpenAI) holding differing opinions. Meanwhile, as AI continues to advance rapidly, concerns surrounding data privacy become increasingly significant. While the EU has a plan in progress, it remains merely a plan at present. It will likely be a considerable amount of time before we see any enforcement measures implemented, and in the meantime AI companies and AI usage should be anchored in existing data privacy laws, including the GDPR.
Leo’s GDPR software offers a suite of privacy tools allowing the creation and maintenance of robust privacy frameworks, including data impact assessment modules and all the registers required under the GDPR. With an integrated reporting system and calendars for important deadlines, it is the ideal software for data protection officers and compliance personnel.
Check here to see our GDPR solution.
1 https://leo.tech/the-gdprs-5th-anniversarys-top-3-lessons/
2 https://www.newyorker.com/news/daily-comment/congress-really-wants-to-regulate-ai-but-no-one-seems-to-know-how
3 https://www.technologyreview.com/2023/06/19/1075063/five-big-takeaways-from-europes-ai-act
4 https://gdpr-info.eu/art-5-gdpr/
5 https://www.cookielawinfo.com/data-privacy-and-ai-governance/
6 https://economictimes.indiatimes.com/news/how-to/ai-and-privacy-the-privacy-concerns-surrounding-ai-its-potential-impact-on-personal-data/articleshow/99738234.cms?from=mdr
7 https://www.immuta.com/blog/k-anonymity-everything-you-need-to-know-2021-guide/