SEOUL – In a landmark move for global technology governance, South Korea has become the first country to bring a comprehensive artificial intelligence law fully into force, with the legislation taking effect on Thursday. The groundbreaking act includes specific provisions aimed at curbing deepfakes and establishes a framework for AI safety and innovation.
Pioneering Legislation in the AI Era
President Lee Jae Myung announced the enforcement of the AI Basic Act, marking a significant step in South Korea's ambition to position itself among the world's top three AI powers, alongside the United States and China. The country, renowned for its memory chip giants Samsung and SK hynix, is now setting a precedent in regulatory oversight for emerging technologies.
Key Provisions and Requirements
The law mandates that companies provide users with advance notice when their services or products utilize generative AI. Additionally, it requires clear labeling of content, such as deepfakes, that may be difficult to distinguish from reality. According to the Ministry of Science and ICT, the act, which was passed in December 2024, aims to "establish a safety- and trust-based foundation to support AI innovation."
Violations of the new regulations can result in fines of up to 30 million won (approximately $20,400). South Korean media have highlighted it as the world's first comprehensive AI regulation to take effect, while the ministry describes it as the second of its kind to be enacted, after the European Union's.
Global Context and Comparisons
The European Parliament adopted what it billed as the "world's first rules on AI" in June 2024, but those regulations are being phased in gradually and will not be fully applicable until 2027. The European Union's Artificial Intelligence Act nonetheless already allows regulators to ban AI systems considered to pose "unacceptable risks" to society, including real-time biometric identification in public spaces and criminal risk assessments based solely on biometric data.
South Korea's Strategic AI Investments
Alongside its regulatory push, South Korea has pledged to triple its spending on artificial intelligence this year. The new legislation designates ten sensitive fields, including nuclear power, criminal investigations, loan screening, education, and medical care, that are subject to enhanced transparency and safety requirements for AI.
Addressing Skepticism and Future Directions
Lim Mun-yeong, vice chairman of the presidential council on national AI strategy, acknowledged that "sceptics fear the regulatory consequences of the law's enactment." He stressed that the nation's transition toward AI is still in its infancy, with infrastructure and systems not yet in place, and called for accelerating AI innovation to navigate an uncharted era. The government has indicated that it will monitor implementation and may suspend certain regulations if necessary, responding to evolving needs.
Deepfakes and Global Concerns
Deepfakes have recently regained global attention following incidents involving Elon Musk's Grok AI chatbot, which faced outrage and bans in several countries for enabling the generation of sexualized images of real people, including children. In response, South Korea's science ministry said that applying digital watermarks or similar identifiers to AI-generated content, including manipulated videos and deepfakes, is a "minimum safety measure to prevent the misuse of technology," noting that such labeling is already a global trend adopted by major international companies.
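To illustrate the kind of identifier the ministry describes, the following is a minimal sketch in Python that attaches a machine-readable "AI-generated" label to an image's metadata using the Pillow library. The file names, metadata keys, and model name here are hypothetical, and plain metadata of this sort is only a simple disclosure label, not the specific labeling or watermarking mechanism prescribed by the AI Basic Act.

    # Minimal sketch: attach a machine-readable "AI-generated" disclosure label
    # to an image's PNG metadata. Keys, values, and file names are illustrative;
    # this is not the specific mechanism mandated by the AI Basic Act.
    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    # Stand-in for an image produced by a generative model.
    synthetic = Image.new("RGB", (256, 256), color=(30, 30, 30))

    # Record the disclosure as PNG text chunks.
    label = PngInfo()
    label.add_text("ai_generated", "true")
    label.add_text("generator", "example-model-v1")  # hypothetical model name
    synthetic.save("synthetic_labeled.png", pnginfo=label)

    # A downstream service could check the label before displaying the image.
    reopened = Image.open("synthetic_labeled.png")
    print(reopened.info.get("ai_generated"))  # prints: true

In practice, major companies tend to rely on more robust approaches, such as invisible watermarks embedded in the pixels themselves or cryptographically signed provenance metadata, since plain text metadata can easily be stripped or lost when content is re-encoded.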
International Regulatory Developments
In a related move, California's governor signed a landmark law regulating AI chatbots in October, defying White House efforts to keep such technology largely unregulated. The law followed revelations of teen suicides linked to chatbot interactions; it requires operators to implement critical safeguards and allows lawsuits when negligence leads to tragedy.
As South Korea leads the way with its comprehensive AI regulations, governments around the world are watching closely as they weigh the need for innovation against the imperative of safety and ethical standards in the rapidly evolving field of artificial intelligence.