With the rapid development of AI technologies such as ChatGPT, human society must both harness the benefits of new technologies and regulate the new risks that AI may bring. The Bio and Science Ethics Working Committee of the China Foundation for Biodiversity Conservation and Green Development (Green Council BASE) has learned that on May 11, 2023, the European Parliament voted on the Artificial Intelligence Act (AI Act), agreeing on regulatory measures for artificial intelligence (AI) technologies such as banning facial recognition in public places and predictive policing systems.
The Artificial Intelligence Act (AI Act), first proposed in 2021, aims to introduce a common regulatory and legal framework for AI. Its scope covers all sectors (except the military) and all types of AI, governing any product or service that uses an AI system. The bill categorizes AI systems into four levels of risk, ranging from minimal to unacceptable. Applications posing higher risk face stricter requirements, including greater transparency and the use of accurate data.
When it comes to complex legislation such as the AI Act, the negotiation process is often a long road. Previously, Green Council BASE staff noted that the World Economic Forum described the Act's classification system in a report this year. The report [5] explained that the core of the AI Act is a classification system that identifies the level of risk an AI technology may pose to the health, safety, or fundamental rights of individuals. The framework includes four levels of risk: unacceptable, high, limited, and minimal.
- [Limited and Minimal Risk] AI systems with limited or minimal risk, such as spam filters or video games, may be used with few requirements beyond compliance with transparency obligations. Systems considered to pose an unacceptable risk, such as government social scoring and real-time biometric identification in public places, are prohibited with few exceptions.
- [High Risk] High-risk AI systems, while permissible, are subject to compliance obligations for developers and users, including rigorous testing, proper documentation of data quality, and a detailed accountability framework for human oversight. AI considered high-risk includes, among others, self-driving vehicles, medical devices, and critical infrastructure machinery. In addition, the bill introduces a stricter regulatory regime for high-risk AI applications, defining the concepts of foundation and generative AI models and introducing distinct obligations for their respective applications.
- [General-Purpose AI] The proposed legislation also outlines provisions for so-called general-purpose AI, i.e., AI systems that can be used for different purposes with varying degrees of risk. These technologies include generative AI systems built on large language models, such as ChatGPT.
- [Unacceptable Risk] The bill prohibits specific applications considered to pose an unacceptable risk, such as manipulative techniques and social scoring. The AI Act also prohibits the use of facial recognition and emotion recognition software in law enforcement, border management, the workplace, and education.
- [Penalties] The penalties are severe: companies face fines of up to €30 million or 6% of global revenue. Submitting false or misleading documentation to regulators can also attract fines. Once the European Parliament adopts its position on the legislation, negotiations among the EU institutions will begin to finalize and implement the law.
In addition, the AI Act clarifies the obligations of EU member states with respect to the use of artificial intelligence and specifies the definition of "artificial intelligence systems".
Moreover, much like the "ethics committees" that have been established in China, the EU AI bill proposes establishing a "European Artificial Intelligence Board" to oversee the implementation of the regulations and ensure uniform application across the EU.
Overall, this is a flagship piece of legislation designed to regulate the potential hazards of AI. Johann Laux, an expert at the Oxford Internet Institute, described the bill as a "risk management system for AI."
Two committees of the European Parliament approved the AI Act in a vote on May 11, paving the way for adoption at the Parliament's plenary session in mid-June.
Background: The following is extracted from the official website of the AI Act.
(1) What is the EU AI Act?
The EU AI Act is a proposed European law on artificial intelligence (AI), the first AI law introduced by a major regulator anywhere. The bill classifies AI applications into three risk categories. First, applications and systems that create an unacceptable risk are prohibited. Second, high-risk applications, such as CV-scanning tools used to rank job applicants, are subject to specific legal requirements. Finally, applications not explicitly prohibited or classified as high-risk are largely left unregulated.
(2) Why is this bill of concern?
AI applications influence what information you see online by predicting what content appeals to you, capture and analyze facial data to enforce laws or personalize advertisements, and are used to diagnose and treat cancer. In other words, AI affects many aspects of your life. Like the EU's 2018 General Data Protection Regulation (GDPR), the EU AI Act could become a global standard, determining the extent to which AI has a positive or negative effect on lives wherever you are. EU AI regulation is already making waves internationally: in late September 2021, Brazil's Congress passed a bill creating a legal framework for AI, which still needs to pass the country's Senate.
(3) How can the EU AI bill be improved?
There are several loopholes and exceptions in the proposal. These flaws limit the bill's ability to ensure that AI has a positive impact on your life. For example, the use of facial recognition technology by police is prohibited, unless the images are captured with a delay or the technology is being used to find missing children.
In addition, the law is inflexible. If, in two years' time, a dangerous AI application is used in an unforeseen sector, the law provides no mechanism to flag it as "high risk."