Creative confidence is key: How generative AI skills drive workplace success

Generative AI literacy significantly enhances employee job performance, primarily by boosting employees' confidence in creative problem-solving, according to a recent study from Renmin University of China. Titled “Generative Artificial Intelligence Literacy: Scale Development and Its Effect on Job Performance” and published in the journal Behavioral Sciences (2025), the study provides the first validated measurement scale for generative AI literacy, or GAIL, in the workplace.
With the rapid proliferation of generative AI tools like ChatGPT, DALL·E, and Midjourney, organizations are increasingly seeking employees who can use these systems productively and ethically. This research not only defines the critical components of AI literacy but also reveals how psychological factors like creative self-efficacy mediate its influence on real-world job outcomes. The study utilized dual datasets from both private (a Chinese FinTech firm) and public (a government office) sectors, validating the universality and reliability of its framework.
What is generative AI literacy and why does it matter?
Generative AI Literacy (GAIL) is defined in the study as an employee's ability to understand, evaluate, and creatively use generative AI tools while adhering to ethical standards. To measure GAIL, the researchers developed a five-factor scale encompassing technical proficiency, prompt optimization, content evaluation, innovative application, and responsible use.
Each of these five dimensions reflects essential competencies required to effectively deploy generative AI in professional environments. Technical proficiency covers basic tool operation, while prompt optimization pertains to crafting inputs that yield useful AI-generated content. Content evaluation assesses an individual’s ability to judge the accuracy, reliability, and originality of AI outputs. Innovative application reflects the creative use of AI tools to solve real-world tasks. Finally, responsible use gauges an individual’s understanding of ethical issues including bias, fairness, and data privacy.
The study found that these five components together form a coherent construct of generative AI literacy that can be consistently measured and tracked. Validation through exploratory and confirmatory factor analysis ensured that this literacy scale holds statistically across different workplace contexts. This enables future researchers, HR departments, and policy designers to quantify AI readiness and target skills development in a systematic manner.
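To make the scale logic concrete, here is a minimal sketch of how a five-dimension instrument like GAIL might be scored and checked for internal consistency. All item counts, responses, and names below are invented for illustration; the study's actual survey items and validation procedure are not reproduced here.

```python
import numpy as np

# Hypothetical illustration: scoring five GAIL-style dimensions from
# Likert-type (1-5) item responses. Data and item counts are simulated,
# not taken from the published scale.
rng = np.random.default_rng(1)
dimensions = ["technical_proficiency", "prompt_optimization",
              "content_evaluation", "innovative_application",
              "responsible_use"]
items_per_dim = 3
n_respondents = 200

# Simulated responses: (respondents, items) array per dimension
responses = {d: rng.integers(1, 6, size=(n_respondents, items_per_dim))
             for d in dimensions}

# Subscale score = mean of a dimension's items;
# composite GAIL score = mean of the five subscale scores
subscales = {d: r.mean(axis=1) for d, r in responses.items()}
gail = np.mean(np.column_stack(list(subscales.values())), axis=1)

def cronbach_alpha(items):
    """Internal-consistency reliability: k/(k-1) * (1 - sum of item
    variances / variance of the summed scale)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

for d in dimensions:
    print(d, round(cronbach_alpha(responses[d]), 2))
```

In practice, a validation study would also run exploratory and confirmatory factor analysis, as the authors did; the sketch only shows the scoring and reliability step.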
How does generative AI literacy influence employee performance?
The study shows that GAIL has a direct and significant impact on job performance. This link is not merely technological; it is deeply psychological. Employees who scored higher on the GAIL scale reported better performance outcomes, and the study attributes much of this improvement to their increased confidence in using generative AI tools creatively.
Creative self-efficacy, or the belief in one's ability to produce original and effective ideas, served as a key mediating variable. Employees with strong generative AI skills were more confident in experimenting with new solutions, automating tedious tasks, or producing novel content. This confidence, in turn, translated into higher quality work and better problem-solving capabilities.
The relationship was tested using structural equation modeling and mediation analysis. Both private and public sector samples showed consistent patterns: high GAIL scores led to higher creative self-efficacy, which then led to better job performance. This empirical evidence confirms the relevance of soft psychological factors, like mindset and self-belief, in mediating the benefits of digital skills in today’s evolving workplaces.
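The mediation logic described above can be sketched with a toy linear model. The data below are simulated, not the study's; the paths and coefficients are invented purely to show how a simple OLS mediation decomposition (total effect = direct effect + indirect effect through the mediator) works.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated illustration of the reported pattern:
# GAIL -> creative self-efficacy (CSE) -> job performance
gail = rng.normal(size=n)
cse = 0.5 * gail + rng.normal(scale=0.8, size=n)                # path a
perf = 0.4 * cse + 0.2 * gail + rng.normal(scale=0.8, size=n)   # paths b, c'

def coefs(y, X):
    """OLS coefficients via least squares (intercept prepended)."""
    X = np.column_stack([np.ones(len(X)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = coefs(cse, gail)[1]                                  # GAIL -> CSE
b, c_prime = coefs(perf, np.column_stack([cse, gail]))[1:]  # mediator, direct
c = coefs(perf, gail)[1]                                 # total effect

indirect = a * b  # effect transmitted through creative self-efficacy
print(f"total={c:.3f} direct={c_prime:.3f} indirect={indirect:.3f}")
```

For a single linear mediator fit by OLS, the total effect decomposes exactly as c = c' + a*b; the study itself used full structural equation modeling, which this sketch only approximates.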
Furthermore, the study situates these findings within the widely accepted Ability–Motivation–Opportunity (AMO) model. GAIL is framed as the “ability” that fuels motivation (through creative self-efficacy) and is actualized when employees have the opportunity to apply these tools meaningfully. This perspective provides a theoretical framework for integrating AI literacy into organizational talent strategies.
What are the implications for businesses and public sector institutions?
The implications of this research are far-reaching. As generative AI tools become ubiquitous across industries, from design and content creation to financial modeling and customer service, there is a growing need for structured training and assessment mechanisms. This study provides a validated instrument that organizations can use to assess AI readiness among their staff and to design targeted upskilling programs.
For human resource professionals, the five-factor GAIL scale offers a roadmap to evaluate both technical and ethical AI competencies. Recruitment strategies can incorporate GAIL assessments, and onboarding programs can be tailored to elevate specific dimensions, such as prompt optimization or ethical awareness.
Public institutions stand to benefit equally. The government sample in the study confirms that AI literacy is just as relevant outside the private sector, particularly as public agencies adopt AI tools for administrative efficiency, policy analysis, and service delivery. The research encourages governments to develop AI literacy curricula for civil servants to improve innovation and responsiveness in public administration.
Additionally, the emphasis on creative self-efficacy highlights the importance of organizational culture. It is not enough to provide access to AI tools; managers must also cultivate environments where experimentation and creativity are encouraged. Training programs should aim not only to improve technical skillsets but also to build psychological confidence, enabling employees to trust their ability to engage with generative AI systems in meaningful ways.
On a broader policy level, this study supports the integration of AI literacy into national digital competence frameworks and lifelong learning agendas. As generative AI continues to disrupt labor markets, equipping the workforce with both the technical and ethical capabilities to engage with these tools will be essential for inclusive and sustainable growth.
First published in: Devdiscourse