Researchers Develop Images Using AI To Decode Brain Activity

EU legislators are about to expand the range of artificial intelligence systems covered by a forthcoming regulation. A transatlantic conflict is still avoidable. Members of the European Parliament are revising the initial draft lists of prohibited and high-risk applications under the Artificial Intelligence (AI) Act.

The main goal of Europe's AI Act is to require developers of high-risk applications to document and test their systems and take other safety precautions. Parliament is likely to include more applications than initially proposed, including broad categories such as systems “likely to influence democratic processes like elections” or “General Purpose AIs” that can be incorporated into many different applications, like OpenAI’s ChatGPT.

Resistance from Big Tech is to be expected, and the US government has expressed concern. An expansion of the Act’s scope could exacerbate existing transatlantic tensions over tech regulation. So far, Washington has largely kept quiet while Europe cracks down on technology.

Washington and Brussels have emphasised cooperation and expressed a desire to harmonise their regulatory approaches to AI through the Trade and Technology Council. This cooperative attitude might be put to the test if the AI Act is expanded.

US companies and regulators were already worried about the AI Act, fearing it was trying to tackle too many applications and would become ineffective and burdensome. US critics should recognize that the Act is not the “one-size-fits-all” solution it is often imagined to be.

The Act attaches a single set of generic requirements to high-risk AI systems, but those requirements could and should be adapted to different applications. This would allow flexibility in enforcement and potential alignment with international standards. Recent AI Act proposals also offer opt-out mechanisms for companies that do not believe their AI systems pose any risks.

The United States is becoming more open to regulation. With ChatGPT’s success sparking debate over AI, Senate Majority Leader Chuck Schumer declared his intention to develop an AI policy framework. Schumer’s statement implies some standardisation of AI system requirements based on the four guardrails of “who,” “where,” “how,” and “protect,” though specifics are still lacking.

At the same time, the FTC has started issuing sharp warnings about AI systems, and the Department of Commerce has launched a request for comments on the creation of AI audits and assessments. In light of this, Washington and Brussels welcome the chance to collaborate.

Together, the US and EU can develop the technical requirements and other specific implementation details that will support both of their regulatory strategies. The EU should invite and promote this collaboration since it has an advantage over the US in some of these discussions. As they implement their own requirements, US lawmakers and federal agencies should, on the other hand, study the EU’s strategy.

A text still needs to be approved by the European Parliament, and negotiations are taking longer than expected. On April 27, EU lawmakers reached a political agreement, and a crucial committee vote is scheduled for May 11. A vote in the full Parliament is expected in mid-June.

Even then, the EU Council, which is composed of the relevant ministers from member states, will need to be brought into agreement with the Parliament’s position. There are still major changes that could be made.

It makes sense that Americans are concerned about the EU’s potential expansion of the AI Act’s application. As the US’s domestic conversation about AI risks develops, it should present specific recommendations to the EU. A fruitful discussion about AI regulation is still both possible and essential.