EU publishes draft regulations governing AI applications; companies in breach face hefty fines

Vestager says EU is taking the lead in developing global norms to ensure AI is trustworthy

The European Commission on Wednesday published strict draft regulations governing the use of artificial intelligence (AI), banning practices including most forms of surveillance, the exploitation of children or vulnerable groups, the use of subliminal techniques and the establishment of social credit systems, in an attempt to set international standards for key AI technologies. Companies that violate the rules would face fines of up to 6% of global turnover or 30 million euros (about HK$280 million), whichever is higher.

The draft would restrict the use of AI in “high-risk” areas where people’s safety or fundamental rights are at stake, including self-driving cars, hiring decisions, bank lending, school admissions and exam scoring, as well as law enforcement and the judicial system. Some uses of AI would be banned outright, including real-time facial recognition in public places, though exemptions may be made for purposes such as national security.

The draft would require companies applying AI in high-risk areas to provide regulators with proof of its safety, including risk assessments and documentation explaining how the AI makes decisions, and to ensure that such systems are created and used under human oversight. Providers of certain applications, such as customer-service chatbots and software that produces manipulated images difficult to distinguish from the real thing, must make clear to users that they are dealing with computer-generated content.

Foreign media described the 108-page draft as an attempt to regulate AI, an emerging technology, before it becomes mainstream. The rules have far-reaching implications not only for Amazon, Google, Facebook and Microsoft, major technology companies that have invested significant resources in developing AI, but also for many other companies that use AI to develop drugs, underwrite insurance and assess creditworthiness. The draft could also affect governments’ use of AI in criminal justice, the distribution of public services and more.

Margrethe Vestager, the European Commission’s executive vice president for digital policy, said, “Trust is necessary, not optional, when it comes to AI. With these landmark provisions, the EU will lead the way in developing new global norms to ensure that AI is trustworthy.”