
ChatGPT ban in Italy: what’s happening?
Update (April 9): on April 5, 2023, a meeting between the Italian Garante and OpenAI took place via videoconference. On April 8, 2023, the Garante started to examine OpenAI's proposals.
On March 31, 2023, the Italian Data Protection Authority (Garante della Privacy) imposed an “immediate temporary limitation on the processing of Italian users' data by OpenAI, the US-based company developing and managing the platform” ChatGPT, while the agency investigates OpenAI.
The Authority published a press release in English, while the full official measure is available only in Italian.
In response to this measure, OpenAI temporarily blocked its service on Italian territory.
Even paying subscribers received an email announcing the block.
Is this an actual block?
If you don't have particular technical skills, yes: you simply can't access ChatGPT from Italy.
However, several users with developer access reported that the Application Programming Interface (API) is still running.
Moreover, skilled users can still access the service through a Virtual Private Network (VPN), like Proton VPN or Outline: these tools make the connection appear to originate from another country, bypassing the block.
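To make concrete what “the API is still running” means, here is a minimal sketch of a direct call to OpenAI's chat completions endpoint – the kind of request a developer can send without going through the blocked web interface. The API key below is a placeholder, and the model name is just an example:

```python
import requests

# Minimal sketch of a direct call to OpenAI's chat completions endpoint.
# The web interface was blocked in Italy, but developers reported that
# requests like this one, sent straight to the API, were still served.
API_KEY = "sk-..."  # placeholder: a real OpenAI API key goes here

response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    json={
        "model": "gpt-3.5-turbo",  # example model name
        "messages": [{"role": "user", "content": "Ciao! Come stai?"}],
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```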
What are the views on the ban in Italy?
The decision sparked a broad discussion, with voices both for and against it.
Some argue that blocking the service entirely was OpenAI's choice, though the Data Protection Authority itself titled its statement “Il Garante blocca ChatGPT” (Italian for “The Authority blocks ChatGPT”). Moreover, it is challenging to imagine a “temporary limitation on the processing of Italian users’ data” without a complete temporary block.
Both the journalistic and the political debate on the topic suffer dramatically from polarisation and partisanship. On one side, we have journalists arguing that “the Italian government has banned Italian intelligence” (see Antonio Polito, il Corriere della Sera), which is, of course, an oversimplification that does not help at all in understanding the events. We also have statements like “Block is a Taliban choice, so we lose money and jobs,” and various positions against the Garante's decision, from different points of view (some complaining about the possible economic loss, others about the perpetuation of Italy's technological backwardness, and so on).
On the other side, we have supporters celebrating the decision, mostly among privacy experts and Data Protection Officers.
This divisive approach makes it challenging to construct sensible arguments. Of course, in the middle, you can find people trying to adopt positions that account for the issue’s complexity, which is what we are trying to do here.
Luca De Biase, an Italian journalist covering technological innovation and the social and economic perspectives of new media, wrote, for example, that we have to remember several things, including:
“There is an oligopoly of mega A.I. The machines that make them go are gigantic. Not many can afford them. They are all American and Chinese. Europe has created a framework of laws that protect citizens. But it has not yet managed to start from this framework to give birth to technological alternatives that can compete with the American and Chinese ones”. [...]
“The design of machines with enormous social and cultural implications can no longer be limited to technological issues. The relationships between humans and machines must be designed so that we don’t always find ourselves in everyday situations where machines impose themselves, and humans must adapt. Nor should we risk holding back introducing new solutions just because we don’t understand them”.
Luciano Floridi, Professor of Philosophy and Ethics of Information at the University of Oxford and very active on the subject on his Facebook profile, was interviewed by the Italian HuffPost – here is the complete English translation – and described the measure as “draconian”.
But even if we choose to deal with the legal question only, we can find experts in law and its intersections with technology holding profoundly differing opinions.
Roberta Covelli (graduate in law and research fellow in labour law) wrote on Fanpage, an Italian digital newspaper, that “it is necessary to be aware of the risks of a fideistic use of new technologies, demanding the implementation of rights even when an algorithm decides: protecting personal data, guaranteeing information on their fate, is, in fact, a democratic act, as well as a form of intelligence.” According to her article and analysis, the ban protects Italians’ rights.
Andrea Monti, a lawyer, writer, and scholar of high-tech law, takes the opposite view: he analyses the order and concludes that it “creates more problems than it solves. It raises serious political and economic criticisms because it calls into question the legitimacy of the entire U.S. ecosystem based on platforms and the data economy without Italy having a valid alternative for citizens and businesses. It reinforces the principle of individual irresponsibility and civic disengagement, suggesting that – to use the umpteenth digital gadget launched on the market – one can renounce claiming respect for one's rights because someone else will take care of it. It justifies the abandonment by adults of their role as educators and guides of the vulnerable subjects who depend on them and are entrusted to them by nature, even before the law”.
Reasons for the ban and a complex topic to explore
The Garante’s decision is based on an alleged lack of information provided to users by OpenAI, the absence of a proper consent request for users accessing ChatGPT, the absence of a legal basis for data collection and processing, and inadequate age-verification measures for users under 13.
The agency wrote that “the processing of personal data of users, including minors, and of the data subjects whose data is used by the service, violates Articles 5, 6, 8, 13, and 25” of the General Data Protection Regulation (GDPR), the European privacy and security law. GDPR’s Article 5 defines the principles relating to the processing of personal data, Article 6 the lawfulness of processing, Article 8 is specific to children, Article 13 concerns the information to be provided where personal data are collected, and Article 25 requires data protection by design and by default.
But what kind of data are we talking about? Are we talking about something like browsing data, email addresses, and credit cards, or data related to the use of the service? In this case, we have to remember that OpenAI exposed sensitive ChatGPT users’ data (namely, titles from conversation histories, possibly the first message of each chat, and active users’ first and last names, email addresses, payment addresses, the last four digits of a credit card number, and credit card expiration dates). The data breach “affected 1.2% of the ChatGPT Plus subscribers who were active during a specific nine-hour window”, says OpenAI. It is entirely appropriate that the Authority is investigating this data breach.
But in its order, the Garante also says that “it has been noted that the processing of personal data of the data subjects is inaccurate, as the information provided by ChatGPT does not always correspond to the actual data”: this is an apparent reference to the so-called hallucinations that ChatGPT suffers from, like all text-producing large language models (LLMs). I showed some examples of these hallucinations, produced when I used Bing’s A.I.-assisted search to ask about myself, in this guide I wrote for TheFix.
So, if we argue that OpenAI wasn’t allowed to scrape data from web pages to train ChatGPT about people, and if we say that OpenAI should avoid any mistakes and give people the possibility to correct them – GDPR clearly states that “every reasonable step must be taken to ensure that personal data that are inaccurate, having regard to the purposes for which they are processed, are erased or rectified without delay (‘accuracy’)” – we are questioning the very functioning of LLMs. Floridi explained to HuffPost that the system is “inherently fallible because it is based on statistical analysis done on billions of data points.”
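Floridi’s point about statistical analysis can be seen in a toy sketch (the numbers below are invented purely for illustration): a language model generates text by sampling the next token from a probability distribution, so a plausible-but-wrong continuation can always be drawn, no matter how good the training data is.

```python
import random

# Toy next-token distribution a model might assign after the prompt
# "The capital of Italy is". The numbers are invented for illustration;
# a real LLM derives them from statistics over billions of tokens.
next_token_probs = {"Rome": 0.6, "Milan": 0.3, "Turin": 0.1}

tokens = list(next_token_probs)
weights = list(next_token_probs.values())

# Generation is sampling, not database lookup: even here the correct
# token "Rome" is drawn only ~60% of the time, so statistically wrong
# ("hallucinated") continuations are built into the mechanism itself.
for _ in range(5):
    print(random.choices(tokens, weights=weights, k=1)[0])
```

This is also why “rectifying” a single wrong fact inside a model is so different from correcting a record in a database: there is no stored record to fix, only weights that shape a probability distribution.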
And so, if this is the Garante’s point, its decision basically amounts to a request for a complete ban on this technology.
Now, it’s perfectly fine to question LLMs, generative machines, and any past, present, or future technology – we do not have to passively endure technology and progress – but doing so means working on the problem not only from the privacy perspective. It means engaging experts from different fields: political, ethical, technological, scientific, sociological, economic, anthropological (even the concept of privacy differs across cultures), and creating working approaches that tackle the problem as a complex one, with a holistic approach that is currently missing.
If the Italian Garante’s decision has at least one non-controversial consequence, it is that we can use this event as a starting point to put on the table all the issues that need to be addressed.
Was OpenAI allowed to take Wikipedia’s content to train ChatGPT? Was Midjourney allowed to train its machine with millions of portraits, paintings, illustrations, and photos? Is it possible to ask these models to unlearn what they have learnt? Who is in charge of innovation? What will happen to human labour once these machines are used worldwide?
These are only some of the questions we need to pose, and this is just the beginning.
As for the specifics of the Italian case, however, all we can do is wait for the results of the Authority’s investigation and monitor how the other European Union member states will move.