AI’s governance role in higher education must be strategically limited, study warns

A new study warns against the growing over-optimism surrounding the use of artificial intelligence (AI) in higher education governance, cautioning that unchecked reliance on AI could result not only in technical shortcomings but also in deeper moral and strategic misalignments. The findings are presented in the study titled “AI as Sub-Symbolic Systems: Understanding the Role of AI in Higher Education Governance”, published in the journal Education Sciences.
While the past decade has seen AI integrated across administrative and instructional domains in universities, the authors argue that its governance applications require critical differentiation. They present a taxonomy that classifies decision types and identifies where AI can assist and, more importantly, where it cannot replace human oversight.
What is AI’s true nature and how does it decide?
The study highlights a crucial distinction between symbolic and sub-symbolic decision-making. Symbolic decisions require holistic design, system-wide understanding, and the ability to restructure institutions: traits associated with human reasoning. Sub-symbolic systems, by contrast, operate through decentralized, incremental optimizations, which is precisely where current AI systems such as machine learning and large language models excel.
Two primary AI paradigms are explored: discriminative AI, which uses labeled data to classify inputs with high accuracy, and generative AI, which identifies patterns to create new content. The former is suitable for functions with clear criteria, such as applicant filtering or quality control, while the latter has more exploratory applications, such as policy drafting or summarizing institutional materials.
But both paradigms fall short on two governance essentials: transparency and adaptability. Discriminative systems may embed historical biases, making them unreliable for fair decision-making. Generative systems risk hallucinations and are notoriously inconsistent when dealing with novel or unstructured problems. As such, neither is inherently suited for complex, value-laden leadership decisions in universities.
In which governance areas can AI be applied effectively?
The study maps AI capabilities onto Stafford Beer’s Viable System Model, a cybernetic framework describing five essential decision systems in any viable institution:
- System One (core operations): Classrooms, labs, and student services.
- System Two (tactical planning): Regulation and coordination across departments.
- System Three (strategic control): Institutional planning and operational oversight.
- System Four (forward planning): Scanning the environment for future opportunities and risks.
- System Five (identity and purpose): Final authority on the university’s mission and moral direction.
According to the authors, AI can support Systems One through Four, with caveats. For example, AI can identify patterns in student dropout rates across departments (System Three), optimize staff resource allocation (System Two), or monitor application trends to inform course development (System Four). Discriminative models could streamline routine decisions with historical precedent, while generative AI can help synthesize institutional documentation.
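To make the System Three example above concrete, the minimal sketch below flags departments whose dropout rate deviates sharply from the institutional mean. The department names, rates, and threshold are hypothetical illustrations, not data from the study, and any such flag would still require the human review the authors insist on.

```python
# Hypothetical sketch: simple statistical monitoring of dropout rates
# across departments, in the spirit of the System Three example.
from statistics import mean, stdev

# Hypothetical department-level dropout rates (illustrative only)
dropout_rates = {
    "Engineering": 0.08,
    "Humanities": 0.11,
    "Business": 0.09,
    "Nursing": 0.24,
    "Sciences": 0.10,
}

def flag_outliers(rates, z_threshold=1.5):
    """Return departments whose dropout rate sits more than z_threshold
    standard deviations above the institutional mean. A crude pattern
    detector: it surfaces anomalies for human attention, it does not
    explain them or decide what to do about them."""
    mu = mean(rates.values())
    sigma = stdev(rates.values())
    return [dept for dept, rate in rates.items()
            if (rate - mu) / sigma > z_threshold]

print(flag_outliers(dropout_rates))  # → ['Nursing']
```

The deliberately simple design mirrors the study's point: sub-symbolic tools excel at incremental monitoring like this, but interpreting why a rate is high, and whether intervention fits the institution's values, remains a symbolic, human decision.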
However, challenges persist. Sub-symbolic AI tends to perpetuate known patterns rather than identify paradigm shifts, meaning it could miss disruptive trends such as pandemics, societal shifts, or legal reforms. Additionally, unless AI systems are carefully supervised, they may misclassify novel problems as routine and handle them incorrectly. This “blind spot” creates risks of institutional inertia or maladaptation.
Why must human leadership retain final authority?
System Five, focused on institutional identity and moral purpose, is the domain where AI categorically fails. The authors emphasize that decisions around a university’s core values, long-term vision, and cultural identity cannot be automated. These require symbolic reasoning, moral judgment, and collective consultation, capacities beyond any machine logic.
Moreover, AI lacks the normative clarity needed to determine “right” versus “wrong” when institutional trade-offs arise. For instance, deciding whether to expand online-only education or preserve traditional face-to-face models in the post-pandemic world involves social, pedagogical, and ethical considerations. Such choices cannot be driven by algorithmic analysis alone.
The study warns that assigning such value-laden decisions to AI, especially generative systems trained on biased or shallow data, may undermine legitimacy, transparency, and stakeholder trust. Even where AI provides administrative support, universities must ensure robust human oversight, especially in auditing outputs, verifying data quality, and maintaining openness in how AI tools are deployed.
Policy implications and future directions
The paper presents two key policy takeaways for higher education institutions:
- Use both symbolic and sub-symbolic approaches strategically. Sub-symbolic AI can help incrementally improve efficiency but must operate within symbolic frameworks guided by institutional vision and human values.
- Train leaders to understand AI’s limits. University governance requires leaders who not only understand the opportunities AI offers but can also recognize its risks. They must know when to allow AI autonomy, and when to step in and override its conclusions.
The authors recommend future empirical studies on how university leaders conceptualize AI’s role in governance. Understanding the psychological, cultural, and operational factors that influence AI adoption at different governance levels will be crucial. Additionally, more research is needed to explore how institutions transition between symbolic and sub-symbolic decision modes, especially during crises or periods of transformation.
FIRST PUBLISHED IN: Devdiscourse