With a $240 million investment from the trade bloc, the EU has opened four permanent testing facilities across the continent.

The European Union is introducing crash tests for artificial intelligence systems to ensure their safety before they reach the market. The project aims to verify the security of AI-powered innovations prior to their release.

The crash test systems and facilities, which will begin operating next year, will give technology providers a place to evaluate AI and robotics in a variety of fields, including manufacturing, health care, agriculture and food, and cities. Given how quickly the technology develops, testing it is the only sensible course of action for the trade bloc.

During a launch event in Copenhagen, EU Director for Artificial Intelligence and Digital Industry Lucilla Sioli said that innovators should be able to release new AI-powered tools on the market as “reliable” products. Sioli also singled out misinformation as one of the dangers that AI poses to people.

Last week, consumer organisations from across Europe urged regulators to start investigating the potential dangers of generative AI systems such as ChatGPT. The advocacy groups hope the effort will lead to the enforcement of existing laws that protect consumers.

Ursula Pachl, deputy director general of BEUC, highlighted the dangers such systems might pose, expressing concerns about the technology’s potential for manipulation, deception, harm, and disinformation.

Rather than waiting for consumer harm to occur, Pachl urged safety, data, and consumer protection authorities to begin investigating AI-powered goods and services immediately. The AI Act, initiated by the European Commission, has been in the works for two years.

The act divides AI systems into four risk categories: unacceptable, high, limited, and minimal or no risk. To safeguard consumers and guarantee the security of AI-powered goods and services, authorities must enact and enforce these rules.

Technologies the trade bloc deems an unacceptable risk will eventually be outlawed, including systems that manipulate people or encourage risky behaviour in children, social scoring, policing systems based on profiling, location, or past criminal behaviour, and identification via biometric systems.

The EU aims to finalize the bill by the end of this year, following MEPs’ June vote on an amended version. Trilateral talks between the European Commission, the European Parliament, and the Council of the EU are currently underway.