"AI Act must not enable mass surveillance overregulation!", Svenja Hahn (FDP) on the AI Act  

December 5, 2023

"With 22 open aspects for Wednesday's trilogue, it is hard to imagine that good results can be achieved on all of them. There cannot be a deal at all costs! It is better to take more time for negotiations to find balanced solutions than to rush to decisions on the future of AI in Europe. In addition to many smaller open aspects, Parliament and Council are still far apart on two main issues: the regulation of foundation models and general-purpose AI, as well on the use of remote biometric identification in public spaces. The AI Act must not enable mass surveillance or overregulation.”

 

Ahead of the final trilogue negotiations on the EU Artificial Intelligence Act (AI Act) on Wednesday, 6 December, Svenja Hahn (FDP), negotiator for the liberal Renew Europe group in the Internal Market Committee, warns against a deal at all costs between the EU member states and the Parliament:

"With 22 open aspects for Wednesday's trilogue, it is hard to imagine that good results can be achieved on all of them. There cannot be a deal at all costs! It is better to take more time for negotiations to find balanced solutions than to rush to decisions on the future of AI in Europe. In addition to many smaller open aspects, Parliament and Council are still far apart on two main issues: the regulation of foundation models and general-purpose AI, as well on the use of remote biometric identification in public spaces. The AI Act must not enable mass surveillance or overregulation.” 

Hahn explains the debate on the regulation of foundation models and general-purpose AI:

"The goal must be lean rules, especially for small and medium-sized companies, which are already affected by excessive bureaucracy. If there is still no majority in favour of self-regulation of foundation models, the regulation should focus on the top models. However, the current Council proposal suggests comprehensive and technical hardly feasible requirements for both small and large players. There is a real threat of massive over-regulation of foundation models and general-purpose AI, regardless of their systemic impact. This would seriously jeopardise AI innovation made in Europe."  

Hahn insists on responsibility along the value chain:

"Clear and implementable rules for the highly impactful foundation models such as GPT, would be important to society as a whole, as rules can help to control serious risks and strengthen smaller companies in particular. Compliance costs must not be passed on to small and medium-sized companies that use such models or general-purpose systems or incorporate them into their own products. Those who build on foundation models must know that those models meet quality and safety standards in order to create safe products. This is in the interests of companies and consumers."  

Hahn criticises the proposals on copyright:

"Upholding copyright and intellectual property rights is undeniably important, but the more suitable place for this would be the separate copyright law, the Commission should modernise this as quickly as possible. Writing half-baked rules into a product law such as the AI Act without an impact assessment does not do justice to either rights holders nor digital companies. It is completely unclear, for example, how watermarking of texts should work technically, and the proposed detailed publication of training data even threatens to disclose business secrets."  

Hahn is particularly concerned about civil rights when it comes to biometric identification in public spaces:  

"I will continue to fight for the ban on real-time biometric identification in public spaces, as Parliament has called for. Retrograde biometric identification for law enforcement must meet highest rule of law standards. Unfortunately, all member states except Germany want to use this technology as unrestricted as possible.“