Citizen attitudes on AI e-government: Support high, risks remain

Artificial intelligence is steadily entering government services worldwide, and citizens are showing both openness and concern. A new systematic review maps the landscape of public attitudes toward AI-driven digital government.
Published in Informatics as "From E-Government to AI E-Government: A Systematic Review of Citizen Attitudes," the paper synthesizes 30 empirical studies published between early 2019 and April 2025. The authors apply PRISMA guidelines to assess how citizens react to chatbots, virtual assistants, and AI-enabled e-services, asking four central questions: what governments are deploying, how researchers measure attitudes, what drives acceptance or rejection, and what concerns are most pressing.
Which AI tools governments are using
The review identifies conversational systems as the most common form of AI in the public sector. Chatbots and virtual assistants dominate deployment, often serving as first points of contact for citizens navigating services. These applications range from providing passport or permit information to answering health-related questions or guiding users through application processes. AI-enabled e-services form the second major category, while a smaller set of other tools represents early-stage or niche innovations.
By mapping current use cases, the study highlights the reality that governments are starting with accessible, low-cost systems that can scale quickly and relieve pressure on human staff. These systems are positioned to automate repetitive tasks, improve information flow, and make services available around the clock. Yet the dominance of chatbots also points to the narrowness of current experimentation. Broader applications such as predictive analytics for policy planning or AI-enabled case management remain relatively underexplored in the studies surveyed.
The review stresses that this focus on conversational AI reflects governments’ preference for low-barrier entry points into automation. These systems are relatively inexpensive compared to complex predictive or decision-support tools, and they provide visible, measurable benefits quickly. Still, limiting experimentation risks slowing long-term transformation, a point the authors argue should be addressed by gradually expanding the scope of AI in digital services.
How citizen attitudes are measured and shaped
Across the 30 studies, researchers most often measure outcomes such as behavioral intention to use, satisfaction, adoption and willingness to use, actual usage, and overall acceptance. These measures reveal not only whether citizens interact with AI services but also how they perceive their quality and trustworthiness.
The review identifies four factors that consistently shape these attitudes. Perceived usefulness is the strongest driver: when citizens believe an AI system helps them complete tasks more efficiently, their acceptance increases. Ease of use is the second most influential, as systems that are intuitive and simple to navigate build confidence and satisfaction. Trust emerges as a decisive enabler of adoption, while perceived risk acts as a major barrier: citizens are far less likely to embrace AI services if they feel uncertain about how their data is used or fear that automated decisions may be unreliable.
The authors point out that the field relies heavily on established theoretical frameworks such as the Technology Acceptance Model (TAM) and the Unified Theory of Acceptance and Use of Technology (UTAUT). Most studies use survey-based methods supported by statistical tools such as PLS-SEM and regression analysis. This reliance creates a comparable evidence base, allowing clear patterns to emerge across countries and service types. Of the reviewed studies, 11 are classified as high quality and 19 as medium quality, giving the review both breadth and reliability.
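To make these methods concrete, the sketch below shows the kind of acceptance model such survey studies estimate. Everything in it is illustrative: the construct names, sample size, and simulated responses are hypothetical rather than drawn from the review, and the reviewed studies typically use PLS-SEM on latent constructs, which the plain regression shown here only approximates.

```python
# Illustrative sketch of a TAM-style acceptance model, NOT the paper's analysis.
# Construct names and data are hypothetical; real studies measure each construct
# with multiple validated survey items and often fit it with PLS-SEM.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400  # hypothetical number of survey respondents

# Simulate construct scores (e.g., means of 5-point Likert items per construct).
df = pd.DataFrame({
    "usefulness": rng.normal(3.8, 0.7, n),
    "ease_of_use": rng.normal(3.5, 0.8, n),
    "trust": rng.normal(3.2, 0.9, n),
    "perceived_risk": rng.normal(2.9, 0.9, n),
})

# Behavioral intention rises with usefulness, ease of use, and trust,
# and falls with perceived risk -- the sign pattern the review reports.
df["intention"] = (
    0.45 * df["usefulness"]
    + 0.25 * df["ease_of_use"]
    + 0.30 * df["trust"]
    - 0.20 * df["perceived_risk"]
    + rng.normal(0, 0.5, n)  # unexplained variation
)

# Regress intention on the four drivers and inspect coefficient estimates.
model = smf.ols(
    "intention ~ usefulness + ease_of_use + trust + perceived_risk",
    data=df,
).fit()
print(model.summary())
```

In a real study, the coefficients would be estimated from citizens' actual survey responses, and a structural equation approach would additionally model measurement error in the latent constructs.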
The findings make clear that while AI can improve satisfaction and efficiency, governments cannot ignore the central role of trust. Citizens expect transparency, accountability, and reassurance that automated systems are designed and overseen responsibly. Without these elements, even useful and easy-to-use services risk rejection.
What citizens fear and what governments must fix
The strongest concerns revealed in the review are data privacy and usability. Citizens routinely express unease about who controls their information, how securely it is stored, and whether it might be misused. These fears are intensified by the opacity of many AI systems, which often fail to explain decisions in terms users can understand. The result is a tension between the convenience AI promises and the risks it poses if trust is undermined.
Usability is another major source of frustration. Poorly designed interfaces, confusing navigation, or unhelpful responses can quickly erode confidence. Even when underlying AI systems are powerful, weak user experiences leave citizens dissatisfied. This concern underscores the importance of involving citizens in design and testing phases to ensure tools meet real-world needs.
The authors recommend a series of measures to address these challenges. Human oversight should remain central, ensuring citizens know there is accountability beyond machines. Training programs for civil servants are essential to strengthen digital literacy within agencies and ensure staff can manage and explain AI systems effectively. Co-creation of services with citizens is highlighted as a strategy to align design with expectations, reduce friction, and improve usability.
The review also identifies gaps in current research. Most studies focus on chatbots and virtual assistants, leaving other applications of AI in government underexplored. Vulnerable and underrepresented groups are often missing from datasets, raising the risk of biased results. The authors argue for more inclusive research that compares attitudes across diverse populations and service types, along with greater transparency through open survey instruments and datasets.
FIRST PUBLISHED IN: Devdiscourse