[news commentary] Europe’s Data Supervisory Authorities Investigating ChatGPT’s Processing

Context: As of June 2023, Italy, Germany, France, and Spain are among the countries that have begun to investigate the use of ChatGPT in Europe. The European Data Protection Board [EDPB] has even established a dedicated task force to coordinate these investigations. If these agencies demand changes to ChatGPT, as the Italian authority already has, it could affect how the service runs for users around the globe. Regulators’ concerns fall broadly into two categories: where ChatGPT’s training data comes from, and how OpenAI delivers information to its users.

Why are we talking about this?

When discussing artificial intelligence, it is hard not to talk about the General Data Protection Regulation [GDPR] at the same time. The GDPR has exerted the greatest influence worldwide in shaping a more controlled data market, which happens to be a crucial element for AI advancements. 

With companies already engaging with ChatGPT, either via employee use or via implementation of the OpenAI API to power their chatbots and support tools, should we expect OpenAI to be facing a “…world of regulatory pain?” How will this affect the employees who are already using the tool to save valuable time? Every few weeks there is news of a Data Supervisory Authority [DSA] opening an investigation into ChatGPT, with the charge being spearheaded by the Italian authority.

Italy temporarily banned the tool several months ago and requested that OpenAI implement some basic changes to its collection of data and its processing of children’s data. On April 28th, ChatGPT returned to the country: OpenAI addressed the Italian DSA’s concerns without making significant alterations to its service, which seemed like a win…but will this continue? Is AI fundamentally at odds with the GDPR? What can you do to remedy your organisation’s use of the tool?

Is AI at odds with the GDPR?

There are several aspects of AI usage that raise red flags for me, which I summarise below:

Note: these concerns apply to AI in general, not just ChatGPT.

  1. Profiling and automated decision making
    • Article 22 restricts this form of decision-making, but the GDPR’s intent here is not always clear. It is a technically complex issue; without getting too deep into it, we need to be aware that the restriction only applies when a decision is based solely on automated processing and produces legal effects or similarly significantly affects the data subject. The wording itself sets a high bar, which is only being extended and broadened by cases such as SCHUFA (C-634/21).
  2. Meaningful information on processing
    • Article 22, read in conjunction with Arts. 12-14 & 15 GDPR, gives individuals a right to a clear and simple explanation of the logic involved in generating a decision about them. This is reinforced by the fact that all GDPR provisions apply to AI processing – especially those regarding fair and transparent processing. This can be quite simple to apply to automated processing that accepts or rejects a loan application, but how can OpenAI feasibly share its logic with me when I request it? Will they share their training data sets? The answer is that most providers, including OpenAI, will not.
  3. Difficulties with Data Subject Rights
    • EU users have a breadth of rights available to them when it comes to the processing of their data by third parties: they can revoke consent, request access to their data, have it rectified or deleted, and more. Regardless of the AI tool, users have found these rights difficult to exercise in practice. This will have to be locked down before EU DSAs are happy with the processing.
  4. Misuse of the tool
    • There have already been several stories of employees flagrantly entering highly confidential personal or company information into ChatGPT so that it would generate a report or complete a task for them quickly. OpenAI provides little information on how it processes this data, or whether it could surface as an ‘answer’ in future model versions. What if my company’s financials are trotted out as an example answer to a question from a university finance student? Not great.
  5. International Transfer woes
    • I won’t go into it too much here, but if you are engaged in the privacy world you will be aware of the headache that international transfers can be – especially when there is not much transparency from the importing company. OpenAI does offer a Data Processing Agreement [DPA] for API users, but that is the extent of their efforts thus far.

There are several other tangential issues, such as the EU’s forthcoming AI Regulation, which could create further compliance obligations for OpenAI.

Will ChatGPT be banned in other countries?

It is unclear whether the DSAs of Europe will ban ChatGPT in the future. As mentioned above, there are many investigations going on simultaneously – including an EDPB taskforce to streamline them all – which is likely to result in at least a list of suggested changes to the service for EU users. I cannot personally see ChatGPT leaving Europe, as the region is such a large source of training data for its model, but we will have to wait and see what the DSAs themselves say.

Can the use of ChatGPT become compliant in my company?

Note: This is not legal advice. Please refer to your Pridatect by Borneo account manager or internal legal team if you wish to implement ChatGPT in your company.

Unfortunately, there are no clear routes to compliance, and so the best practice, for now, would be to ban employee use of ChatGPT within the organisation. The next best thing would be to, at minimum:

  • Create an AI Use Policy or ChatGPT Acceptable Use Policy governing how the tool is used within your organisation. It could include rules such as:
    • Do not input the personal data of clients, employees, etc.
    • Do not sign up with email addresses or names that could identify individual employees.
    • Do not rely on ChatGPT uncritically; always double-check its outputs.
    • Engage ‘private mode’ in ChatGPT (i.e. disable chat history and training).
  • Conduct an impact assessment, evaluating the potential risks and measures you can implement to mitigate those risks – and implement these before engaging with ChatGPT.
  • Enter into a DPA [with SCCs & DTIA] with OpenAI to govern the processing.
    • You may require supplemental measures such as employee consent for the international transfer.
  • Adjust your employee privacy notice to cover the new processing – as well as adding the activity to your RoPA.
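The first policy point above – keeping personal data out of prompts – can be partially enforced in tooling rather than left to employee discipline alone. Below is a minimal, hypothetical sketch in Python of a pre-submission redaction step; the patterns and function names are illustrative assumptions, not any official OpenAI feature, and a real deployment should use a dedicated PII-detection library rather than regexes alone.

```python
import re

# Illustrative patterns only; real PII detection needs far more than
# two regexes (names, addresses, IDs, context-aware matching, etc.).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace matched PII with placeholder tokens before the text
    is sent to any external AI service."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Summarise the complaint from jane.doe@example.com (tel. +44 20 7946 0958)."
print(redact(prompt))
# -> Summarise the complaint from [EMAIL REDACTED] (tel. [PHONE REDACTED]).
```

A gateway like this, sitting between employees and the ChatGPT interface or API, gives the organisation an audit point for its AI Use Policy instead of relying purely on training and trust.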

Personally, I would avoid the use of ChatGPT within your organisation until further guidance is available, as there is currently a sanction and liability risk around both the data processing itself and data subject access requests relating to it. Implementation and use of the OpenAI API is an even more complex topic; please refer to your lawyer or DPO with these questions. For more information, see our Q&A post on ChatGPT, which sets out our reasoning in much more detail in response to a real-life client question we recently received.

An interesting topic: though there are significant difficulties to overcome, AI has arrived and is here to stay. We need to ensure proper and compliant data handling going forward – and hopefully the AI Regulation will provide actionable and practical guidance [including on ethical considerations!]. If you have any questions about your company’s compliance, please contact: [email protected]



Created by:


Charles Maddy-Trevitt

UK Market GDPR Specialist at Borneo.
Charles has a background in a wide range of industries and sectors, with international experience (US/UK/Canada/EU) in data protection. It is this knowledge and experience that allows Charles to guide clients through the minefield of data protection regulations and make compliance simple.
