Pages: 119–156
人工智慧全面監管時代來臨?—剖析歐盟人工智慧法之光與影
Is the Era of Comprehensive Regulation of Artificial Intelligence Coming?—Analysis of Light and Shadow of the EU’s Artificial Intelligence Act
人工智慧、歐盟、人工智慧法、風險管制基準、人工智慧系統、通用人工智慧模型、布魯塞爾效應、監管失調
Artificial Intelligence, European Union, Artificial Intelligence Act, Risk-based Approach, AI System, General-purpose Artificial Intelligence Model, Brussels Effect, Regulatory Misalignment
全球正處於應否針對人工智慧進行監管以及合適作法的討論浪潮,在風險漸增下,人工智慧的監管思維已由初期的自律為主,逐步向他律靠攏並有轉為以他律為重之趨勢,而2024年8月正式生效的歐盟「人工智慧法」(AIA),成為人工智慧全面監管的里程碑式立法。歐盟甚早便確定推動監管專法,立於「風險管制」基準將人工智慧系統應用上可能衍生的風險,劃分為:1、無法接受的風險;2、高度風險;3、有限風險;及4、最小風險或無風險等四個級別,並按風險級別的高低設定其規範密度,同時在法案研議後期,加入現時備受關注的生成式人工智慧/通用人工智慧模型之規範。受歐盟影響,現時已有若干國家刻正推動相近立法。然而全面性監管專法是否為人工智慧之治理良器,並不乏爭論,歐盟「人工智慧法」本身難以褪去受詬病之處,除可能導致「監管失調」情形,亦可能衍生料想之外的負面作用。此外,執政者所採取的監管舉措極可能落入科林格里奇困境,亦未必可憑之解決人工智慧實務應用衍生的所有問題,使得當前仍有國家對人工智慧之全面監管採取保留態度。儘管現時國際上存在著多樣化的人工智慧治理作法,但不同方法之間已可窺見共通之處,無論是受歐盟人工智慧法所產生的布魯塞爾效應之影響、選擇制定全面性專法,抑或擬保持監管彈性而採取軟法機制,異中求同應是人工智慧治理推動上可預見之必然走向。
The world is engaged in a wave of discussion over whether Artificial Intelligence (AI) should be regulated and what the appropriate regulatory approach would be. As risks increase, regulatory thinking on AI has gradually shifted from an initial reliance on self-regulation (soft law) toward external, hard-law mechanisms. The European Union's Artificial Intelligence Act (AIA), which entered into force in August 2024, has become a landmark piece of legislation for the comprehensive regulation of AI. Following a "risk-based approach," the AIA classifies the risks that may arise from the application of AI systems into four levels: (1) unacceptable risk; (2) high risk; (3) limited risk; and (4) minimal or no risk, and sets the density of regulation according to the level of risk. In the later stages of the legislative process, the EU also added provisions on generative AI and General-purpose AI models, which currently attract considerable attention. Influenced by the EU, several countries are now promoting similar legislation. However, whether comprehensive regulatory legislation is an effective governance tool for AI remains debated. The AIA itself has been criticized on several grounds: it may give rise to "regulatory misalignment" and may produce negative effects that could not be foreseen in advance. In addition, regulatory measures adopted by policymakers are likely to fall into the so-called "Collingridge Dilemma" and may not resolve all the problems arising from the practical application of AI, so some countries still take a reserved attitude toward the comprehensive regulation of AI. Although diverse AI governance practices currently coexist internationally, common ground can already be discerned among the different approaches. Whether a jurisdiction is influenced by the Brussels Effect generated by the EU's AIA, chooses to enact comprehensive legislation, or adopts soft-law mechanisms to preserve regulatory flexibility, seeking common ground amid differences should be the foreseeable and inevitable direction in the advancement of AI governance.
