High-impact vs high-risk AI: Key divide between South Korea and EU rules


COE-EDP, VisionRI | Updated: 14-05-2026 17:09 IST | Created: 14-05-2026 17:09 IST

A new comparative legal analysis claims that South Korea’s new artificial intelligence (AI) law may lower compliance costs and protect domestic innovation, but it also risks creating legal uncertainty, weak accountability and gaps in fundamental rights protection. Published in Laws, the research finds that Korea’s innovation-friendly AI Basic Act and the European Union’s stricter AI Act reflect sharply different regulatory philosophies at a time when governments are racing to govern generative AI without stalling digital transformation.

The study, titled “Regulatory Governance of AI in the Generative AI Era: A Comparative Study of South Korea’s AI Basic Act and the EU AI Act for Sustainable Digital Transformation,” compares South Korea’s Framework Act on Artificial Intelligence, enacted in January 2025 and set to take effect in January 2026, with the EU AI Act, which entered into force in August 2024. It focuses on the difference between Korea’s “high-impact” AI model and Europe’s “high-risk” AI framework, the legal status of self-regulation, and the risk that civil liability could be shifted away from AI operators and onto frontline workers.

Korea chooses flexibility as EU builds a preventive AI regime

The EU has moved toward a preventive, rights-centered framework built around risk classification, mandatory obligations and strong penalties. South Korea, on the other hand, has chosen a hybrid model that combines industrial promotion with safety and trust measures, reflecting its need to protect both domestic AI firms and citizens exposed to global AI platforms.

The EU AI Act treats high-risk AI systems as products that must meet defined obligations before or during market entry. These obligations include risk management, data governance, technical documentation, logging, transparency, human oversight and robustness. The framework directly lists high-risk areas, including biometrics, education, employment, essential services, law enforcement, migration control and justice administration. This gives companies a clearer understanding of whether their systems fall within the law’s scope.

South Korea’s AI Basic Act follows a different path. It is not only a regulatory law. It is also an industrial promotion law. Its provisions cover AI industry development, specialized AI zones, standardization, public sector adoption and ethics. That dual purpose makes the Korean law more flexible, but also less predictable. It tries to support national competitiveness while establishing a basic safety framework.

The most important legal difference lies in the choice of key concept. The EU uses “high-risk AI,” a term connected to measurable harm and preventive regulation. Korea uses “high-impact AI,” a broader and more neutral term. That choice helps avoid the stigma attached to labeling a system “risky,” and it may reduce barriers for startups and local technology firms. However, it also creates uncertainty because “impact” is harder to measure than risk.

The Korean approach allows more room for administrative interpretation. The specific scope and criteria for high-impact AI are largely delegated to presidential decrees and guidelines. This could help regulators respond quickly to fast-changing technology, but it raises rule-of-law concerns. If businesses cannot predict from the law itself whether their AI systems are covered, compliance becomes uncertain and rights protection becomes weaker.

The difference is especially important in generative AI. Systems that produce text, images, audio, video or code may also affect employment, healthcare, finance, education or public services. A single system could therefore fall under both generative AI rules and high-impact AI rules. The Korean law does not clearly resolve how overlapping obligations should work. That gap could create confusion for companies and affected individuals.

Korea appears to have chosen a lighter, more innovation-friendly system to prevent domestic firms from being overburdened at a critical stage of AI development. But the price is a weaker legal structure for clarity, accountability and remedies when AI causes harm.

Self-regulation could weaken accountability without stronger legal safeguards

The Korean law relies heavily on self-regulation and ex-post management. Instead of requiring a full pre-market conformity assessment for high-impact AI, it places much of the burden on businesses to manage risks, notify users, maintain human oversight and create procedures for objections. Many of these duties are framed as responsibilities or effort-based obligations rather than hard legal requirements backed by strong sanctions.

Self-regulation can work when companies have strong incentives to comply, when independent oversight exists, and when violations lead to meaningful consequences. Without those safeguards, self-regulation can become a way for companies to set their own standards while avoiding binding accountability.

The EU model also uses self-regulation, but in a different way. Codes of conduct, harmonized standards and industry rules operate within a framework of mandatory duties. Compliance with recognized standards can help companies show conformity, but it does not replace legal obligations. In Korea, by contrast, self-regulation carries more of the substantive regulatory burden.

Guidelines are another concern. In principle, guidelines are soft norms and do not have the force of law. In practice, however, companies may treat them as binding if agencies use compliance with guidelines in procurement, licensing, certification or public support decisions. This can create a blurred legal environment in which rules are not formally binding but become unavoidable in practice.

That blurring matters because core regulatory questions may be left to administrative guidance rather than legislation. If high-impact AI standards, safety expectations and compliance methods are shaped mainly through guidelines, the executive branch could effectively define rights and obligations without sufficient parliamentary control, raising concerns about legal certainty and democratic oversight.

Institutional capacity is another missing piece. Effective self-regulation requires independent supervision, third-party audits, collective redress mechanisms and broad stakeholder participation. The Korean framework is centered on the Ministry of Science and ICT, which also has a role in promoting the AI industry. That creates the risk of institutional conflict between industrial support and regulatory enforcement.

The absence of a strong independent AI oversight body could weaken public trust. If the same governance structure is responsible for advancing AI and supervising its risks, affected people may doubt whether complaints and harms will be handled impartially. This is particularly important in sectors such as hiring, credit, welfare, education and healthcare, where AI decisions can affect life chances.

The research also points to the problem of regulatory capture. When industry has more technical knowledge than regulators, it can shape standards in its own favor. This is a familiar challenge in fast-moving technology markets. AI makes the problem sharper because systems are technically complex, often opaque and difficult for outsiders to audit.

A further gap concerns affected persons. Korea’s law recognizes people who may not directly use an AI system but are affected by its decisions. This is a strong and progressive concept. Job applicants screened by AI, loan applicants assessed by AI, or welfare recipients reviewed by AI could all fall into this category. However, recognition alone is not enough. Without clear rights to explanation, objection, access and remedy, the category may not deliver practical protection.

Liability risks may shift from AI firms to frontline workers

One of the most serious concerns involves the way responsibility may be assigned when human beings remain involved in AI-supported decisions. The Korean law includes a human intervention exception: if a person intervenes and controls the final decision, the system may, in some situations, avoid classification as high-impact AI. In theory, this encourages human oversight; in practice, it could create a legal escape route.

A company may design an AI system so that a doctor, recruiter, bank officer or public official clicks the final approval button. If harm occurs, responsibility may be pushed onto that frontline practitioner, even if the AI system shaped the decision and the practitioner had little real ability to challenge it. The analysis describes this as a “liability lightning rod” problem.

The danger is most visible in AI-driven recruitment. For instance, if a recruitment system produces biased recommendations and a human HR employee formally approves the final decision, the company could argue that the human made the decision. The affected job applicant would then face the difficult task of proving bias, causation and negligence, even though the key information about the model, data and design remains with the company.

This problem is further intensified by information asymmetry. Victims of AI harm often do not know how a system was trained, what data it used, how it weighted variables, or why it produced a specific decision. Under ordinary civil liability rules, they may still have to prove that the AI system was defective, that the operator was negligent, and that the harm was caused by that system. That burden can make legal remedies nearly impossible.

The EU has moved toward stronger liability tools, including discussion of causation presumptions and product liability updates for AI systems. Korea’s AI Basic Act does not yet provide comparable modernization of civil liability. Without such changes, affected individuals may have rights in theory but limited remedies in practice.

The research proposes a five-step legislative model for Korea:

  • Place the core criteria for high-impact AI directly in the law, including sectors, autonomy of decision-making and the scale of affected individuals. This would reduce uncertainty and address concerns over excessive delegation to subordinate rules.
  • Create periodic re-evaluation of high-impact AI categories. AI risks change quickly, and a static list can become outdated. A review cycle, supported by an expert committee, would allow regulators to add or remove categories based on evidence.
  • Strengthen parliamentary oversight. Any expansion of high-impact AI categories should be reported to the National Assembly before taking effect, with room for objection. This would give democratic control over decisions that affect business freedom and fundamental rights.
  • Gradually convert some effort-based responsibilities into mandatory obligations. Risk management and user notification could become binding after an initial transition period, followed later by stronger requirements for human oversight and objection procedures. This staged model would give companies time to adjust while still moving toward enforceable safeguards.
  • Introduce harmonized safety standards and a presumption of conformity. Companies that comply with approved standards could be presumed to meet certain legal obligations. This would make self-regulation more credible by tying it to recognized benchmarks rather than vague guidance.

The analysis also calls for civil liability reform. A presumption of causation could help victims when harm results from high-impact AI. Liability could be allocated more clearly across developers, deployers and practitioners. Collective dispute mediation could also help when many people are harmed by the same algorithm but individual lawsuits are too costly.

The findings also come with important limitations that shape how the results should be interpreted. The analysis is a legal and comparative study, not an empirical measurement of how the Korean AI law will operate once implemented. Its concerns about liability shifts, regulatory uncertainty and self-regulation failures identify structural risks in the legal design, but actual enforcement will depend on future decrees, guidelines, court decisions and institutional practice.

First published in: Devdiscourse