Italy has become the first Western country to ban ChatGPT, the well-known artificial intelligence (AI) chatbot from American startup OpenAI.
Last week, the Italian data protection watchdog ordered OpenAI to temporarily stop processing Italian users’ data amid an investigation into a suspected breach of Europe’s stringent privacy rules.

The regulator, known as Garante, cited a data breach at OpenAI that allowed users to view the titles of conversations other users were having with the chatbot.

Garante said in a statement on Friday that there “appears to be no legal basis underpinning the massive collection and processing of personal data in order to ‘train’ the algorithms on which the platform relies.”

Garante also raised concerns about ChatGPT’s lack of age restrictions and the chatbot’s potential to serve up factually incorrect information in its responses. OpenAI, which is backed by Microsoft, risks a fine of 20 million euros ($21.8 million), or 4% of its global annual revenue, if it does not come up with remedies within 20 days.

The rapid pace of AI development and its implications for society are prompting governments to draw up their own rules for the technology. Generative AI refers to a set of AI technologies that generate new content based on prompts from users.

It is more advanced than previous iterations of AI, thanks to new large language models. There have been calls for AI to face regulation, but the technology has progressed so quickly that governments struggle to keep up. Computers can now create realistic art, write entire essays, or even generate lines of code in a matter of seconds.

Sophie Hackford, a futurist and global technology innovation advisor for John Deere, warned on CNBC’s “Squawk Box Europe” that we need to be careful not to create a world where humans are subservient to a greater machine future.

“Technology is here to serve us, to make our cancer diagnosis quicker or make humans not have to do jobs that we don’t want to do,” she said. “We need to be thinking about it carefully now, and acting on it from a regulatory perspective.”

Regulators are concerned about the challenges AI poses for job security, data privacy, and equality. Some governments are considering bans on general-purpose systems such as ChatGPT, while the U.K. has announced plans to regulate AI by applying existing regulations rather than new legislation.

Britain is proposing key principles for companies using AI in their products: safety, transparency, fairness, accountability, and contestability. It is not proposing restrictions on ChatGPT; instead, it wants to ensure companies develop and use AI responsibly and give users enough information about how and why decisions are made.

Digital Minister Michelle Donelan highlighted the risks and opportunities of generative AI in a speech to Parliament last Wednesday. By taking a non-statutory approach, the government will be able to respond quickly to advances in AI and intervene further if necessary.

Dan Holmes, fraud prevention leader at Feedzai, said the main priority of the U.K.’s approach was addressing “what good AI usage looks like.”

The European Union has proposed a groundbreaking piece of legislation on AI, known as the European AI Act, which would restrict the use of AI in critical infrastructure, education, law enforcement, and the judicial system. The act is expected to take a far more restrictive stance on AI than Britain’s proposals; the U.K. has been diverging from EU digital laws since its withdrawal from the bloc.

The EU’s General Data Protection Regulation (GDPR) already governs how companies can process and store personal data. The draft AI rules treat ChatGPT as a form of general-purpose AI used in high-risk applications; the commission defines high-risk AI systems as those that could affect people’s fundamental rights or safety.

Providers of such high-risk systems would face measures such as strict risk assessments and requirements to eliminate discrimination arising from the datasets that feed their algorithms.

“The EU has a wealth of deep-pocketed AI expertise. They have access to some of the best talent in the world, and they have had this discussion before,” Darktrace’s Max Heinemeyer, the company’s chief product officer, told CNBC.

Other EU countries are watching Italy’s action against ChatGPT and debating whether to follow suit. Germany’s Federal Commissioner for Data Protection, Ulrich Kelber, has said a similar move would be possible in Germany.

According to Reuters, privacy regulators in France, Ireland, and the U.K. have reached out to their Italian counterpart to learn more about the basis for the ban. Sweden’s data protection authority, by contrast, has ruled out a ban.

Italy is able to take such action because OpenAI has no office in the EU. Most American tech giants, including Meta and Google, have their European headquarters in Ireland, making it the most active jurisdiction when it comes to data privacy regulation of those companies.

The U.S. hasn’t yet proposed formal rules to regulate AI technology, but the National Institute of Standards and Technology has put out a national framework that gives companies guidance on managing risks and potential harms.

Last month, the Federal Trade Commission received a complaint from a nonprofit research group alleging GPT-4, OpenAI’s latest large language model, is “biased, deceptive, and a risk to privacy and public safety” and violates the agency’s AI guidelines. So far, no action has been taken to limit ChatGPT.

The complaint could lead to an investigation into OpenAI and a suspension of the commercial deployment of its large language models.

The FTC declined to comment. ChatGPT is also unavailable in China and in other countries with heavy internet censorship, including North Korea, Iran, and Russia. Although it is not formally banned in China, OpenAI does not allow users there to sign up.

Several of China’s largest tech companies, including Baidu, Alibaba, and JD.com, have announced plans to develop rivals to ChatGPT.

China recently introduced a first-of-its-kind regulation on so-called deepfakes: synthetically generated or altered images, videos, or text made using AI.

It also introduced rules governing the way companies operate recommendation algorithms, requiring companies to file details of their algorithms with the cyberspace regulator. These regulations could apply to any kind of ChatGPT-style technology.