What is South Korea’s AI Basic Act, the world’s first fully enforced AI law?

After the European Union introduced the AI Act in 2024, South Korea introduced the AI Basic Act on January 22, a new set of laws aimed at regulating artificial intelligence. Billed as one of the most comprehensive sets of laws anywhere in the world, the AI Basic Act could serve as an inspiration for countries around the world, but the legislation is already facing backlash.
The AI Basic Act, which requires companies to label any AI-generated content, has been criticised by tech startups in South Korea, while civil society groups say the law is too soft.
What exactly is South Korea’s AI Basic Act?
In a nutshell, the AI Basic Act requires companies that use AI to add invisible digital watermarks to AI-generated content, such as cartoons and artwork. The law also requires realistic deepfakes to carry a visible label.
South Korea’s AI act also requires operators of “High-Impact AI” systems, such as those used in medical diagnosis, hiring, and loan approvals, to conduct risk assessments and explain how decisions are made. However, if the final decision is made by a human, the system may fall outside the law’s scope.
The AI Basic Act also states that companies that break the law will have to pay fines of up to 30 million won, with the government giving a grace period of at least a year before penalties are imposed.
The South Korean government also says the law mostly focuses on promoting the AI industry rather than restricting it, but the bar is set so high that officials acknowledge no current models meet its standards.
Why are South Korea’s AI startups opposing it?
Companies operating in South Korea must determine whether their systems are categorised as high-impact AI, a process that takes a lot of time and creates uncertainty about whether they will meet the standards set by the country.
Another factor is that foreign firms in the country will have an upper hand over local AI companies: all South Korean companies must meet the stringent requirements set by the AI Basic Act, while those based outside Korea, like Google and OpenAI, face more relaxed thresholds.
South Korea’s AI Basic Act is also labelled the “world’s first” AI law to be fully enforced by a country. According to The Guardian, Alice Oh, a computer science professor at the Korea Advanced Institute of Science and Technology (KAIST), admits that the law is far from perfect, but says its intention was to encourage use of AI without stamping down on innovation.
But a survey conducted by the Startup Alliance in December last year suggests that 98% of AI startups in South Korea were not prepared for compliance. “There’s a bit of resentment. Why do we have to be the first to do this?” says Lim Jung-Wook, the co-head of Startup Alliance.
Why are civil groups unhappy with the AI Basic Act?
According to a report by Security Hero, a US-based identity protection firm, almost 53% of global deepfake porn victims are from South Korea. In 2024, an investigation exposed a huge network of Telegram chatrooms that were engaged in generating and distributing AI-generated sexual images of women and girls.
The AI Basic Act was submitted to the South Korean parliament back in 2020, but faced backlash as civil society groups said it favoured industry interests over protecting citizens.
A day after the law came into force, four organisations, including a group of human rights lawyers, issued a statement saying that it contained no provisions to protect citizens from risk.
The group said that while the law has provisions for “users”, the term refers to hospitals, public institutions using AI, and financial companies, and says nothing about the people affected by AI.
South Korea’s human rights commission also criticised the law, saying it is vague about the definition of “high-impact AI” and that those in regulatory blind spots are likely to suffer rights violations.
To this, South Korea’s Ministry of Science and ICT said it expects the AI Basic Act to “remove legal uncertainty” and work towards a “healthy and safe domestic AI ecosystem.”


