AI in people management moving from operational tool to strategic framework


A new bibliometric analysis reveals an evolving intellectual landscape at the intersection of artificial intelligence (AI), human resource management (HRM), and leadership studies. Published in Business Research Quarterly under the title “The Present and Future of Artificial Intelligence in People Management Research: A Bibliometric Approach,” the study by Robin Bauwens and Saša Batistič highlights how AI integration is reshaping people management scholarship and practice, ushering in a new era of interdisciplinary convergence.

Drawing on 863 primary and over 42,000 secondary documents from the Web of Science, the authors use document co-citation and bibliographic coupling techniques to map the current state and projected trajectory of AI applications in HRM and leadership. Their findings illuminate not only the fragmentation of the present discourse but also a future characterized by strategic convergence and methodological innovation.
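To make the two techniques concrete: co-citation analysis links two documents whenever a later paper cites both of them, while bibliographic coupling links two citing papers by the references they share. The minimal sketch below computes both counts on a toy citation map; the paper identifiers and data layout are illustrative assumptions, not the authors' Web of Science pipeline.

```python
from itertools import combinations
from collections import Counter

# Toy citation map: citing paper -> set of cited (referenced) papers.
# All identifiers are made up for illustration.
citations = {
    "P1": {"A", "B", "C"},
    "P2": {"A", "B"},
    "P3": {"B", "C", "D"},
}

# Co-citation: two documents are co-cited when the same citing paper
# references both; the count is how many citing papers do so.
co_citation = Counter()
for refs in citations.values():
    for pair in combinations(sorted(refs), 2):
        co_citation[pair] += 1

# Bibliographic coupling: two citing papers are coupled by the number
# of references they have in common.
coupling = Counter()
for (p1, r1), (p2, r2) in combinations(citations.items(), 2):
    shared = len(r1 & r2)
    if shared:
        coupling[(p1, p2)] = shared

print(co_citation.most_common(3))  # e.g. [(('A', 'B'), 2), (('B', 'C'), 2), ...]
print(coupling)                    # e.g. Counter({('P1', 'P2'): 2, ('P1', 'P3'): 2, ('P2', 'P3'): 1})
```

Clustering the resulting similarity networks is what yields the research clusters the study reports.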

How is the current research on AI and people management fragmented?

The study’s co-citation analysis identified six major clusters of research activity that reveal a splintered academic discourse. While management-related clusters center on practical AI applications in HR analytics, leadership theory, and strategic transformation, technical clusters remain focused on machine learning algorithms and multi-agent systems with little crossover. For example, HRM-related works such as Tambe et al. (2019) and Marler and Boudreau (2017) emphasize the operational challenges of AI in human-centered environments, while technical papers like those of Hou et al. (2009) focus on autonomous consensus algorithms in robotics.

One key insight is the stark disciplinary divide: HRM-focused papers rarely reference leadership insights, and leadership studies are largely detached from AI-specific HR applications. This misalignment undermines the potential for a holistic people management strategy that integrates technological capability with human-centric leadership.

The leadership domain is largely rooted in foundational theories like charismatic leadership and upper echelons theory, while HRM discourse is dominated by analytics and human–AI interaction themes. Notably, the technical clusters in the co-citation map, particularly those on machine learning and multi-agent systems, are isolated from these human-focused themes, indicating limited interdisciplinary synthesis to date.

What will the future of AI research in people management look like?

The bibliographic coupling analysis, which tracks emerging research fronts, suggests a structural realignment is on the horizon. Four future-facing clusters were identified: leader-follower control in multi-agent systems, cooperative control in agent-based environments, machine learning and optimization algorithms, and AI in management. Notably, the “AI in Management” cluster shows increasing overlap with HRM, marketing, leadership, and organizational behavior, hinting at a breakdown of existing silos.

This convergence is set to bring strategic HRM and leadership under one conceptual roof. Research is increasingly addressing challenges like AI-induced job crafting, employee perceptions, e-leadership, and the use of machine learning for strategic HRM configurations. The authors highlight the potential for machine learning algorithms like particle swarm optimization to simulate HRM system behaviors and explore how bundles of HR practices interact under AI influence.
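As a purely illustrative sketch of that idea, the snippet below runs a basic particle swarm optimization loop over a vector of HR-practice intensities, scored by a hypothetical fitness function standing in for a simulated outcome; the function, dimensions, and parameters are assumptions, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def bundle_fitness(x):
    # Hypothetical stand-in for a simulated outcome (e.g. an engagement score)
    # produced by a bundle of HR-practice intensities x in [0, 1]^d.
    target = np.array([0.8, 0.4, 0.6, 0.7])
    return -np.sum((x - target) ** 2)  # higher is better

def pso(fitness, dim=4, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = rng.random((n_particles, dim))          # candidate practice bundles
    vel = np.zeros_like(pos)
    pbest = pos.copy()                            # each particle's best bundle so far
    pbest_val = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_val.argmax()].copy()      # best bundle found by the swarm
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0.0, 1.0)
        vals = np.array([fitness(p) for p in pos])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmax()].copy()
    return gbest

print(pso(bundle_fitness))  # best-found practice-intensity bundle
```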

The future also points to AI not just as a management tool but as a methodological asset. Papers are now exploring AI-driven methods like reinforcement learning, neural networks, and unsupervised clustering to study leadership behaviors, HR metrics, and performance systems in simulated environments. This indicates a shift from using AI solely for operational gains toward its role in theory-building and research design.
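For instance, a minimal unsupervised-clustering sketch along those lines might group simulated employee profiles without labels and leave interpretation to the researcher; the three synthetic segments below are invented for illustration only.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Simulated employee profiles: columns could stand for performance,
# engagement, and training hours (all values made up for illustration).
profiles = np.vstack([
    rng.normal([0.8, 0.7, 0.2], 0.05, size=(50, 3)),
    rng.normal([0.4, 0.3, 0.6], 0.05, size=(50, 3)),
    rng.normal([0.6, 0.9, 0.9], 0.05, size=(50, 3)),
])

# Unsupervised clustering recovers latent groups without labels,
# which a researcher could then interpret against HR or leadership theory.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(profiles)
print(np.bincount(kmeans.labels_))  # roughly 50 profiles per cluster
```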

What strategic implications does this hold for organizations and scholars?

Bauwens and Batistič argue that a more integrated and strategic approach to AI in people management is urgently needed. They recommend that organizations adopt a system-wide AI strategy that spans technical capability, ethical governance, leadership support, and human development. This includes training leaders to communicate AI’s value, updating HR systems to reflect AI-enhanced roles, and ensuring fairness and trust across AI applications.

At the research level, the study calls for greater attention to strategic HRM systems rather than isolated practices. Most recent work has focused narrowly on recruitment algorithms and individual AI use cases, neglecting the interdependent nature of HRM bundles. Future studies should leverage optimization algorithms and machine learning to refine these complex systems and understand their causal effects on workforce behavior and performance.

The study also highlights the need to embrace e-leadership models and simulate leadership dynamics using AI. With AI already performing supervisory functions in some organizations, understanding how leadership behaviors can be replicated, enhanced, or supported by machines becomes essential. Multi-agent system research, common in robotics and engineering, could provide simulation-based insights into distributed leadership, adaptability, and team-based decision-making.
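A minimal, hypothetical example of that kind of simulation is a discrete-time leader-follower consensus loop, in which followers repeatedly move toward the average of their neighbours' states while a designated leader holds its state fixed. The network, gain, and interpretation below are assumptions for illustration, not results from the paper.

```python
import numpy as np

# Minimal discrete-time leader-follower consensus: followers repeatedly
# average their in-neighbours' states, the leader keeps a fixed state.
n = 5                      # agent 0 is the leader, agents 1-4 are followers
adjacency = np.array([
    [0, 0, 0, 0, 0],       # leader listens to no one
    [1, 0, 1, 0, 0],       # follower 1 listens to the leader and follower 2
    [0, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [1, 0, 0, 1, 0],
])
state = np.array([1.0, 0.2, -0.3, 0.5, 0.9])   # e.g. a shared priority level
eps = 0.3                                      # coupling gain / step size

for _ in range(50):
    # Each agent moves toward the average state of its in-neighbours.
    neighbour_avg = adjacency @ state / np.maximum(adjacency.sum(axis=1), 1)
    new_state = state + eps * (neighbour_avg - state)
    new_state[0] = state[0]                    # the leader's state never changes
    state = new_state

print(state)  # followers converge toward the leader's value (1.0)
```

In a people-management setting, the "state" could be read as alignment on a priority or decision, which is the kind of distributed-leadership dynamic such simulations let researchers probe.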

Ethics remains another vital thread. The authors caution that without interdisciplinary collaboration across HR, marketing, and IT, ethical lapses in AI deployment, such as bias and opacity, could undermine legitimacy. This concern is especially timely given regulatory developments like the EU’s Artificial Intelligence Act, which imposes compliance obligations on organizations deploying AI for workforce management.
