Archives Of Social Science - ASS

Research Article

Generative AI, HR Analytics and Organizational Justice: Towards an Ethical Governance Model for Human Resources Decisions

Mignenan V1,2,*, Mahamat Ahmat M3

1Department of Management, University of Moundou, Chad
2Member of the Carrefour d’Innovation et d’Appui aux Entreprises laboratory, University of Quebec at Chicoutimi, Canada
3Department master’s degree in management sciences, Member of the Laboratory for Study and Research in Applied Economics and Management (LAREAG), University of N'Djamena, Chad
 

Corresponding Author: 

Dr. Victor Mignenan, Department of Management, University of Moundou (Chad) and member of the Carrefour d’Innovation et d’Appui aux Entreprises laboratory, University of Quebec at Chicoutimi, Canada. Email: mignenanvictor@univ.teluq.ca

ORCID: https://orcid.org/0000-0002-5628-1601

 

Copyright © Victor Mignenan

Citation: Mignenan V, Mahamat Ahmat M. Generative AI, HR Analytics and Organizational Justice: Towards an Ethical Governance Model for Human Resources Decisions. Arch Soc Sci. 2026;2(1):1-14.

Received Date: 07 January 2026
Published Date: 02 March 2026
Volume 2 Issue 1

Abstract

This study analyzes how the integration of generative AI into human resources decision-making processes influences organizational justice, highlighting the governance mechanisms necessary to ensure perceived equity. Contrary to the idea that AI automatically produces fairer decisions, the results show that perceptions of fairness rest above all on algorithmic transparency, human supervision and the organizational arrangements governing the use of these technologies. The study is based on a sequential design articulating an experiment and structural equation modeling (N = 412). Qualitative data highlight an ambivalence: AI is perceived as potentially more objective but arouses mistrust when it operates without explanation. Quantitative analyses confirm that transparency enhances procedural justice (β = 0.47, p < 0.001) and trust in the system (β = 0.52, p < 0.001), while human supervision increases the acceptability of decisions (β = 0.38, p < 0.01). Conversely, algorithmic opacity significantly reduces perceived justice (β = −0.31, p < 0.01).

On the theoretical level, this research proposes an integrated model articulating generative AI, organizational justice and ethical governance, thus contributing to filling a notable gap in the literature on algorithmic equity in HRM. It empirically demonstrates that perceived justice does not stem from the technology itself, but from the mechanisms of transparency, explainability, and oversight that guide its use. It also clarifies the perceptual processes by which employees assess the legitimacy of AI-assisted decisions, offering renewed insight into theories of procedural justice and organizational trust. Finally, it highlights the crucial role of human-AI hybridization, enriching the work devoted to socio-technical systems in HR practices.

On the managerial level, the study provides concrete guidance for the design of transparent, explainable and supervised HR decision-making systems, by identifying algorithmic transparency and human supervision as essential levers to strengthen the acceptability and legitimacy of decisions produced with AI. It highlights the increased responsibility of HR professionals as guarantors of algorithmic justice in the selection, assessment and talent management processes. Finally, it proposes an operational framework for ethical governance including the training of stakeholders, the regular audit of algorithms and proactive communication with employees in order to preserve trust and decision-making legitimacy.

Keywords

Generative AI, HR Analytics, Organizational Justice, Ethical Governance, Human Resources Decisions.

Introduction

The rapid rise of generative artificial intelligence is profoundly transforming human resources management by redefining the way employee data is collected, analyzed, and interpreted. Backed by HR analytics, generative AI now makes it possible to produce actionable insights at an unprecedented level of scale, speed and complexity. Whether it's staffing, performance evaluation, workforce planning, or performance management, these technologies are reorganizing the decision-making capacity of organizations and simultaneously reorienting the dynamics of power, control, and recognition in the workplace. This technological restructuring raises fundamental questions about organizational justice, particularly with respect to procedural fairness, algorithmic transparency, and the equitable distribution of opportunities and outcomes.

Recent literature highlights that algorithms, often designed as "black boxes", can reproduce or amplify pre-existing inequalities, especially when biases in training data remain invisible or poorly understood.1 Other work, however, indicates that well-governed and well-supervised systems can reduce some forms of human bias, particularly in selection or evaluation processes.2 Algorithmic justice thus appears as a multidimensional construct requiring the simultaneous consideration of data quality, the learning logic of the models and the organizational capacity to interpret, explain and question the results generated.3 This tension feeds a central paradox: while AI promises increased rationalization and enhanced objectivity, it also introduces new regimes of opacity, surveillance and informational asymmetry, likely to transform employees' relationship to organizational judgment.4,5

This paradox opens up a grey area that has not yet been sufficiently explored: that of the ethical governance of decisions produced or co-constructed by generative AI. Despite the abundance of work on algorithmic biases, responsible AI or the dynamics of work transformation, the interfaces between HR analytics, organizational justice and governance remain fragmented.6 Organizations are faced with the difficulty of articulating technical standards of transparency, auditability or traceability with the principles of organizational justice7 and institutional or union expectations of reliability and accountability.8 Existing conceptual frameworks remain largely focused on the technical performance of algorithms, to the detriment of the relational, cognitive, and political dimensions that shape how employees perceive and interpret the use of these tools in decisions that affect them.

In this context, this article examines how an ethical governance model can guide the use of generative AI and HR analytics in a way that consolidates, rather than weakens, organizational justice in human resources decisions. This question involves jointly examining the dynamics of algorithmic learning, the managerial logics of implementation, the expectations of perceived justice and the transformations of the relationship to work induced by decision-making automation.

The main objective of the article is to propose an integrative model of ethical governance applied to AI-augmented decisions in HRM, by articulating the contributions of generative AI, the mechanisms of HR analytics and the distributive, procedural and interactional foundations of organizational justice. The analysis first aims to clarify the mechanisms by which generative AI shapes perceptions of fairness in HR processes.6 It then focuses on the organizational, technological and institutional conditions likely to promote an ethical framework for these decisions. Finally, it proposes a conceptual framework to guide the design of hybrid human–AI systems capable of supporting robust organizational justice.

The article is divided into four sections. The first presents a critical review of recent work on generative AI, HR analytics, and organizational justice. The second introduces an integrative conceptual framework linking the technological, organizational and ethical dimensions of the phenomenon. The third describes the methodology used to empirically analyze this framework. The last section discusses the theoretical, managerial and socio-political implications of the proposed model and highlights the issues and future research avenues.

Literature Review

Generative AI in Human Resource Management

Generative artificial intelligence is becoming an increasingly important part of contemporary human resource management, redefining the way organizations design and implement decision-making practices. According to Davenport and Miller9, generative AI is a major breakthrough since it automates not only data analysis, but also the creation of decision-making content, transforming the very nature of HRM expertise. Compared to traditional predictive systems, it can produce reasoned recommendations, generate job descriptions, analyze skills and simulate management scenarios in real time. According to Strohmeier and Piazza10, this artificial cognitive ability significantly increases the velocity and scale of decision-making.

However, this increased speed sometimes reinforces professionals' dependence on algorithmic suggestions, which can, according to Meijerink, Bondarouk and Lepak11, reduce the exercise of critical judgment and transform generative AI into a true "co-decision-maker" rather than a simple assistance tool. This evolution is part of a broader socio-technical transformation of HRM. Yet, despite the proliferation of technological discourses, the empirical literature remains limited on the precise effects of this co-decision on employees' perceptions of organizational justice.

Algorithmic Bias, Opacity and Ethical Risks

The ethical risks associated with the use of generative AI are a major concern. According to O'Neil1, algorithms tend to reproduce structural biases present in training data, thereby amplifying inequalities in hiring, promotion, or evaluation. Compared to more traditional analytical approaches, generative AI systems introduce additional opacity: as models change continuously, accurate traceability of decisions becomes more difficult to establish, which Martin et al.4 refer to as "evolutionary opacity dynamics". According to Raghavan et al.2, automated CV analysis systems have already demonstrated a propensity to discriminate against certain historically disadvantaged groups, despite their alleged neutrality.12

On the other hand, several studies suggest that intelligent systems could, under certain conditions, reduce human bias. According to Manning et al.13, the use of high-quality data, combined with auditable algorithms, would lead to fairer decisions than those based exclusively on the subjective judgment of managers. However, this optimistic outlook remains tempered by the lack of universal standards for algorithmic transparency. Contrary to what a purely technocentric vision suggests, the literature nevertheless converges on the idea that the ethical issues related to generative AI cannot be dissociated from the organizational dynamics, managerial values and institutional context in which these technologies are deployed.

Organizational Justice: Distributive, Procedural and Interactional

Organizational justice provides an essential analytical framework for understanding the effects of generative AI on HR decisions.14 According to Greenberg7, it can be broken down into three dimensions: distributive justice, centered on the equity of outcomes; procedural justice, which concerns the fairness of processes; and interactional justice, linked to the quality of interpersonal interactions. Compared to traditional decision-making contexts, the introduction of generative systems simultaneously modifies these three components.

According to Colquitt et al.15, procedural justice is based on transparency, consistency, and the ability of individuals to understand the rules that guide decisions. This requirement becomes problematic when decisions emanate from an opaque generative model, difficult to explain and whose internal logics often escape the actors in the field. Meijerink and Keegan5 also point out that automation can improve the consistency of processes while reducing the perception of interactional fairness, especially when AI replaces human exchanges traditionally perceived as recognition. On the other hand, recent work shows that human-in-the-loop approaches, where algorithmic decisions are supervised, contextualized and adjusted by managers, strengthen the legitimacy and acceptability of hybrid judgments.

In contrast to the conceptual richness of the literature on organizational justice, few studies articulate in an integrated way the perceptions of equity and the mechanisms specific to generative technologies. According to several authors, there is still a lack of an explanatory framework capable of simultaneously integrating the distributive, procedural and interactional dimensions with the cognitive and technical specificities of generative AI. It is precisely this theoretical deficit that this article proposes to address (Table 1).

Justice dimension | Potential positive effects | Risks and paradoxes identified
Distributive | More systematic decisions; reduction of individual preferences13 | Silent reproduction of biases in the data; amplified inequalities
Procedural | Increased consistency; standardization | Opacity of models; limited traceability2
Interactional | Increased availability of automated feedback | Dehumanization of relationships; loss of trust5

Table 1. Effects of generative AI on organizational justice dimensions.
Source: authors' work based on literature, January 2026

The normative frameworks developed over the past decade are essential benchmarks for the ethical use of generative AI. According to the OECD (2019)16, AI governance is based on five principles: robustness, transparency, accountability, fairness and human oversight. However, these principles remain general and do not always consider the specificities of HRM decisions.

According to the UNESCO Guidelines (2021)17, AI must be centred on human rights, including non-discrimination and equal access to opportunities. Compared to these global recommendations, the OBVIA (International Observatory on the Societal Impacts of Artificial Intelligence and Digital Technology) proposes more operational frameworks, adapted to organizational contexts, with an emphasis on the auditability and explainability of models.18 However, the literature highlights the difficulty of transposing these recommendations into the daily practices of managers.

The ILO, in its reports on the future of work (2020–2023), recalls that automation must strengthen social justice, not undermine collective protections. However, these reports do not yet offer concrete mechanisms to frame the use of generative AI in HRM, which leaves a normative grey area between general principles and practices in the field.

Gaps: Lack of an Integrated Framework for Ethical HRM Decisions

Despite the growing abundance of research, a major gap remains. There is no integrative framework linking generative AI, HR analytics, and organizational justice from a governance perspective. Current work remains fragmented: some studies focus on algorithmic performance, others on governance or perceptions of justice. However, none of them proposes a coherent articulation between technical mechanisms, managerial practices and human perceptions.

Compared to rapid advances in computational sciences, management sciences are lagging behind in conceptualizing the social and organizational effects of generative AI. On the other hand, avenues are emerging, particularly on the co-construction of decisions, worker participation and continuous ethical auditing.4,8 In any case, the current state of the literature justifies the need to propose an ethical governance model capable of guiding, in a holistic manner, hybrid decisions produced in HRM.

Conceptual Framework

The conceptual framework is grounded in an integrated articulation between organizational agility, AI-related justice perceptions, psychological health at work, and sustainable performance. Rather than treating agility as a purely structural capability, this model positions it as a socio-cognitive and technological capacity that shapes how artificial intelligence is implemented, interpreted, and experienced within organizations.

Organizational agility refers to an organization’s capacity to detect, interpret, and respond proactively to environmental signals in contexts of uncertainty and digital transformation.19 Beyond operational flexibility, agility represents a distributed cognitive capability embedded in managerial practices, adaptive routines, and digital infrastructures.20 In digitally mediated environments, agility influences how AI systems are deployed, adjusted, and governed.

However, agility alone does not guarantee positive outcomes. In AI-enabled workplaces, employee perceptions of justice related to AI systems become central. AI-related justice refers to the perceived fairness of algorithmic decision-making processes, including transparency, procedural fairness, accountability, and the equitable distribution of outcomes. When AI systems influence evaluation, scheduling, recruitment, or performance monitoring, employees interpret these systems through fairness lenses.

Agile organizations are theoretically better positioned to foster AI-related justice because they rely on adaptive feedback loops, participative governance, and iterative adjustment mechanisms. These characteristics can enhance transparency, allow employee voice, and reduce perceived arbitrariness in algorithmic decisions. In contrast, in rigid or poorly governed systems, AI may amplify perceptions of opacity and loss of control, negatively affecting trust.

Psychological health at work emerges as a mediating dimension in this framework. Psychological health encompasses well-being, resilience, meaning at work, and quality of social interactions.21 It is not merely an individual resource but an organizational construct shaped by management practices, supervisory styles, and digital governance structures.22 When AI is perceived as fair, transparent, and supportive, it can reduce cognitive strain and reinforce psychological safety. Conversely, perceived algorithmic injustice may intensify stress, job insecurity, and emotional exhaustion.

This dual dynamic reveals a critical paradox: agility can either buffer or amplify the psychological effects of AI implementation. While agile teams benefit from autonomy and role clarity, excessive performance monitoring or accelerated digital rhythms may generate intensification pressures.23 The fairness of AI governance becomes the regulatory mechanism that determines whether agility contributes to well-being or strain.24

Sustainable performance represents the ultimate outcome of the model. It integrates economic, social, and human dimensions.25 Sustainable performance is conceptualized as a dynamic equilibrium emerging from the interaction between adaptive capacity, perceived justice, and employee psychological health. It is not a static result but the product of ongoing alignment between technological systems, human capital, and organizational climate.26

The refined causal logic of the model can therefore be summarized as follows:

Organizational Agility → AI-related Justice Perceptions → Psychological Health → Sustainable Performance.
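As a minimal formal sketch (notation ours, not drawn from the study's estimates), this chain can be written as three linear structural equations, in which the indirect effect of agility on sustainable performance is the product of the three path coefficients, β₁β₂β₃:

```latex
% Illustrative specification of the hypothesized causal chain
% (subscript i indexes respondents; betas are standardized paths)
\begin{aligned}
\text{Justice}_i     &= \beta_{1}\,\text{Agility}_i + \varepsilon_{1i} \\
\text{Health}_i      &= \beta_{2}\,\text{Justice}_i + \varepsilon_{2i} \\
\text{Performance}_i &= \beta_{3}\,\text{Health}_i  + \varepsilon_{3i}
\end{aligned}
```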

This articulation addresses a key gap in the literature. While previous studies have separately examined agility, digitalization, and well-being, few have integrated AI justice as a central explanatory mechanism. By positioning algorithmic fairness as the mediating bridge between agility and psychological outcomes, this framework offers a testable and theoretically coherent model aligned with contemporary challenges of AI-driven organizations.

Table 2 below summarizes the theoretical convergences and divergences observed.

Concept | Theoretical consensus | Divergences / Paradoxes | Recent references
Organizational agility | Ability to adapt quickly; distributed collective intelligence | Intensification of work; time pressure | 19,20,23
Psychological health | Mental balance, well-being, social support | Organizational determinants still under-documented | 21,22
Sustainable performance | Economic, social and psychological integration | Approach still too static in the literature | 25,26

Table 2. Major theoretical contributions and grey areas
Source: authors' work based on literature, January 2026

 

The causal relationships among these dimensions can be visualized in Figure 1 below, which illustrates the integrative logic on which the model is based.

 

Figure 1. Design logic: from agility to sustainable performance

Source: authors' work based on literature, January 2026

This conceptual framework thus proposes a theoretical articulation according to which organizational agility improves psychological health, which is a central mechanism producing lasting effects on the performance of organizations. However, unlike several classical models focused solely on HR practices, the model offers a truly systemic approach, simultaneously integrating the technological, cognitive and psychosocial dynamics specific to contemporary work.

Hypotheses

Hypothesis 1: Effect of Generative AI on Perceived Organizational Justice

If generative AI systems are used as decision-support tools in HR processes, then perceived procedural justice should increase, insofar as the algorithm ensures consistency and standardization of decision-making rules. According to Manning et al. (2023)13, AI can reduce human arbitrariness and increase the predictability of decisions, thereby enhancing procedural fairness. On the other hand, when algorithmic opacity is high, distributive and interactional justice tends to decrease, as employees perceive a loss of control and a lack of understandable justification.2,4 Compared to human decisions, generative AI accentuates the tension between coherence and explainability. According to Law et al. (2022),27 organizations must offer intelligible explanations to preserve the legitimacy of decisions. Thus, the use of generative AI positively influences perceived procedural justice, but only if a minimum of explainability accompanies the process.

H1: The use of generative AI in HR decisions is positively associated with perceived procedural justice.

Hypothesis 2: Moderation through Algorithmic Transparency

If the level of algorithmic transparency is high, then the positive relationship between the use of generative AI and perceived organizational justice is strengthened. According to Shin and Park (2021), transparency helps reduce perceived uncertainty and increases trust in automated systems. Compared to opaque systems, explainable devices (Explainable AI) mitigate feelings of injustice by facilitating the understanding of decision rules.16,17 On the other hand, research shows that partial transparency can create a paradox: it improves technical understanding but reveals latent biases, thus compromising the perceived legitimacy of systems.28 Nevertheless, the majority of recent studies converge on the idea that transparency remains a central determinant of the social acceptability of decision-making technologies.11,14,18

H2: Algorithmic transparency positively increases the effect of generative AI on perceived organizational justice.

Hypothesis 3: Mediation of Ethical Governance 

If decisions supported by generative AI are based on robust ethical governance mechanisms, then the impact of AI on perceived organizational justice is significantly channeled through these mechanisms. According to Floridi and Taddeo (2020)29, ethical governance acts as a normative filter, ensuring that automated decisions respect the principles of justice, non-discrimination and accountability. Compared to organizations without such structures, those that incorporate algorithmic audits, transparency standards, and human oversight procedures achieve a higher level of perceived justice.8 On the other hand, the lack of governance creates a grey area where biases, errors, and undetected disparities become invisible to managers and employees alike. According to Lee and Singh (2022)30, this opacity leads to a decline in confidence and a sense of moral imbalance. In any case, recent literature confirms that ethical governance is an essential mediating mechanism between technology and organizational justice.

H3: Ethical governance positively mediates the relationship between the use of generative AI and perceived organizational justice.

Figure 2 below provides an illustrative synthesis of these hypotheses.

Figure 2. Hypothesis synthesis
Source: authors' work based on literature, January 2026

 

Methodology

The study adopts a sequential multi-method quantitative design that combines a controlled vignette experiment with a confirmatory survey-based structural model. Although the study was initially labelled as mixed-methods, it does not actually integrate qualitative and quantitative elements. Rather, it combines two distinct quantitative approaches (experimental and survey-based) within a sequential explanatory logic. Therefore, the appropriate classification is a multi-method design.31,32

The experimental phase strengthens internal validity by isolating causal mechanisms related to perceived algorithmic justice, while the subsequent structural equation modeling (SEM) phase enhances external validity and generalizability through large-scale statistical testing. This complementary use of quantitative methods aligns with recommendations for studying emerging digital phenomena where both causal inference and structural validation are required.33,34

Given the complexity of perceptions surrounding algorithmic decision-making, this multi-method approach allows for controlled causal testing while validating broader nomological relationships across constructs.

To enhance methodological transparency, the research design explicitly details sampling procedures, vignette randomization protocols, manipulation checks, ethical approval processes, and statistical controls for common method bias.

Research Design

The study unfolds in two sequential phases. The first phase consists of an experimental vignette study in which participants were randomly assigned to one of three human resource decision-making scenarios: a fully human decision, an opaque AI-assisted decision, or an explainable hybrid AI–human decision. Randomization was implemented using Qualtrics’ built-in random assignment function, ensuring equal probability of exposure to each condition and minimizing selection bias.

To verify the effectiveness of the experimental manipulation, participants responded to three manipulation-check items assessing the perceived role of AI, the transparency of the decision process, and the extent of human involvement. Independent sample t-tests revealed statistically significant differences across experimental conditions (p < .001), confirming that participants accurately perceived the intended scenario distinctions and validating the integrity of the experimental design.
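As an illustration, a manipulation check of this kind can be computed with a Welch t-test; the sketch below assumes a hypothetical data file and column names (condition, perceived_transparency) that are ours, not the study's:

```python
import pandas as pd
from scipy import stats

# Hypothetical data: one row per participant, with the assigned vignette
# condition and a 7-point manipulation-check rating (all names ours).
df = pd.read_csv("vignette_responses.csv")  # assumed file

opaque = df.loc[df["condition"] == "opaque_ai", "perceived_transparency"]
hybrid = df.loc[df["condition"] == "hybrid_explainable", "perceived_transparency"]

# Welch's t-test (no equal-variance assumption) comparing the two AI
# conditions on perceived transparency of the decision process.
t, p = stats.ttest_ind(hybrid, opaque, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")
```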

Population and Sample

The target population includes employees, managers, and graduate-level industrial relations students with prior exposure to digital HR systems. A stratified convenience sampling strategy was employed to ensure heterogeneity across sectors (public, private, and nonprofit) and varying levels of experience with AI tools.

Participants were recruited through professional LinkedIn networks, HR departments of partner organizations, graduate student associations, and executive education programs. Eligibility criteria required participants to be at least 18 years old, possess a minimum of one year of professional experience, and have used at least one digital HR tool. Responses were excluded if more than 20% of data were missing or if attention-check items were failed.

The final sample consisted of 120 participants in the experimental phase and 350 participants in the quantitative validation phase. A priori power analysis conducted using G*Power 3.1 indicated that a sample size of 350 provides statistical power of .90 to detect medium effect sizes in SEM models including moderation effects (f² = 0.15), thus ensuring adequate analytical sensitivity (Table 3).

Variable | % / Value | Description
Average age | 32.7 years | Range: 18–57 years
Gender | 51% women / 47% men / 2% other | Balanced diversity
Industry | 38% public; 44% private; 18% NGOs | Sectoral heterogeneity
Experience in HR | 0–20 years (M = 5.3) | 61% have used a digital HR tool
AI experience | 41% low; 46% moderate; 13% strong | Variation for moderation analyses

Table 3. Sample characteristics (quantitative phase, N = 350)
Source: authors' survey data, January 2026
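The a priori power analysis reported above was run in G*Power; as a rough cross-check, power for a medium interaction effect (f² = 0.15) at N = 350 can also be approximated by simulation. The sketch below is ours, with simplified assumptions (standardized regressors and a single OLS interaction rather than the full SEM):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n, n_sims, alpha, f2 = 350, 2000, 0.05, 0.15
b = np.sqrt(f2)  # with unit-variance regressors and errors, f^2 = b^2

hits = 0
for _ in range(n_sims):
    x = rng.standard_normal(n)   # AI exposure (standardized)
    z = rng.standard_normal(n)   # algorithmic transparency (standardized)
    y = 0.3 * x + 0.3 * z + b * x * z + rng.standard_normal(n)
    X = sm.add_constant(np.column_stack([x, z, x * z]))
    p_int = sm.OLS(y, X).fit().pvalues[3]  # p-value of the interaction term
    hits += p_int < alpha

print(f"Simulated power for the interaction: {hits / n_sims:.2f}")
```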

Measuring Instruments

Key variables are measured using internationally validated psychometric scales, adapted to the context of generative AI. The items are based on 7-point Likert scales, in line with the recommendations of Podsakoff et al. (2012)35 to reduce method bias.

Instruments include:

  • Perceived organizational justice36,37
  • Algorithmic Transparency
  • Trust in AI38
  • Technological acceptability (TAM2/UTAUT2)39
  • Perceived ethical governance4,18

Table 4 below presents the constructs, sample items and sources.

Concept | Example item | Source | Expected α
Procedural justice | "The decisions seem to be based on coherent rules" | 15 | 0.85–0.92
Distributive justice | "The result obtained is proportional to my contribution" | 15 | 0.86
Interactional justice | "The explanations are given with respect" | 15 | 0.88
Algorithmic transparency | "I understand how AI arrives at its recommendations" | Shin & Park (2021) | 0.80
Trust in AI | "I believe that AI acts reliably" | 38 | 0.87
Ethical governance | "The organization monitors the fairness of automated decisions" | 4 | 0.83
Acceptability | "I would use this system regularly" | 39 | 0.91

Table 4. Measurement scales used
Source: compilation of literature, September 2025

Collection Methods

Validated psychometric scales were adapted to the context of generative AI decision-making. All constructs were measured using seven-point Likert scales. In line with recommendations by Podsakoff et al. (2012)35, several procedural remedies were implemented to reduce common method variance. These included psychological separation between predictor and outcome variables, randomization of item order, assured anonymity, neutral wording of items, and clear segmentation of questionnaire sections.

Analytical Methods

Multiple statistical tests were conducted to assess potential common method bias. Harman’s single-factor test indicated that the first unrotated factor accounted for 28.4% of total variance, well below the 50% threshold, suggesting that common method variance was not dominant. Full collinearity assessment following Kock (2015)40 showed that all variance inflation factor (VIF) values were below 3.3, indicating absence of pathological collinearity. In addition, a theoretically unrelated marker variable was included in the survey instrument; its correlations with substantive constructs were nonsignificant, further reducing concerns regarding method bias. Collectively, these results suggest that common method bias does not materially threaten the validity of the findings.
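For readers who wish to replicate these diagnostics, the sketch below approximates Harman's single-factor test with the first unrotated principal component and computes full-collinearity VIFs; file and column names are placeholders of ours:

```python
import pandas as pd
import statsmodels.api as sm
from sklearn.decomposition import PCA
from statsmodels.stats.outliers_influence import variance_inflation_factor

items = pd.read_csv("survey_items.csv")  # assumed file of Likert items

# Harman's single-factor test, approximated by the variance share of the
# first unrotated principal component (should stay well below 50%).
pca = PCA().fit((items - items.mean()) / items.std())
print(f"First-factor variance: {pca.explained_variance_ratio_[0]:.1%}")

# Full-collinearity VIFs across construct scores (Kock's 3.3 criterion).
scores = pd.read_csv("construct_scores.csv")  # assumed file
X = sm.add_constant(scores)
for i, name in enumerate(scores.columns, start=1):
    print(name, round(variance_inflation_factor(X.values, i), 2))
```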

Figure 3. Analytical methods
Source: authors' work based on literature, January 2026

Analytical Strategy

Data were analyzed using SmartPLS 4 and R (lavaan), depending on model requirements. The analysis proceeded in several stages. First, the measurement model was assessed through reliability (Cronbach’s alpha and composite reliability), convergent validity (average variance extracted), and discriminant validity (HTMT ratios). Second, the structural model was evaluated using path coefficients (β), explained variance (R²), predictive relevance (Q²), and bootstrapping procedures with 5,000 resamples.

Moderation effects were examined through interaction terms between AI exposure and algorithmic transparency. Mediation effects were assessed by testing indirect pathways through ethical governance using bootstrapped confidence intervals. Finally, experimental conditions were compared using ANOVA and Tukey post-hoc tests to determine differences across decision-making scenarios.
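By way of illustration, the bootstrapped indirect-effect test can be sketched with plain OLS stand-ins for the full SEM; the toy data and variable names are ours (x = AI use, m = perceived ethical governance, y = perceived justice):

```python
import numpy as np

def indirect_effect(x, m, y):
    """a*b indirect effect from two OLS fits: m ~ x, then y ~ m + x."""
    a = np.polyfit(x, m, 1)[0]                    # path a: x -> m
    Xm = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(Xm, y, rcond=None)[0][1]  # path b: m -> y given x
    return a * b

rng = np.random.default_rng(0)
n = 412
x = rng.standard_normal(n)                 # toy data for illustration only
m = 0.5 * x + rng.standard_normal(n)
y = 0.4 * m + 0.1 * x + rng.standard_normal(n)

boot = []
for _ in range(5000):                      # 5,000 resamples, as reported
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(x[idx], m[idx], y[idx]))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"95% bootstrap CI for the indirect effect: [{lo:.3f}, {hi:.3f}]")
```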

Results

Qualitative Results

The qualitative analysis, built from experimental vignettes and exploratory interviews, highlights an ambivalent perception of the use of generative AI in decisions related to human resources management. Participants describe a delicate balance between perceived objectivity and the risk of technological injustice. They believe that AI can increase decision-making fairness when its criteria are made explicit and when human oversight remains active. However, as soon as decision-making mechanisms become opaque, perceptions are reversed, trust decreases and judgments of procedural injustice increase.

This tension recurs throughout the verbatim responses. When automated decision-making is accompanied by understandable justifications, several respondents say that the machine represents a "guarantee of consistency". As one participant put it: "When I understand the criteria, even if it's an algorithm, I feel like the decision is more objective. At least I can know how the system calculated." This finding is in line with the work of Shin and Park (2021), who demonstrate that algorithmic comprehensibility is a major determinant of procedural justice.

Conversely, in scenarios of total opacity, reactions show increased concern: "We don't know what AI is based on. It looks like an arbitrary judgment, but it doesn't say its name." This perception is in line with Martin et al. (2022)4, who observe that a deficit in explainability leads to a drop in legitimacy and organizational acceptability.

The thematic analysis reveals three central conceptual categories: perception of control (ability to understand or contest the decision), perceived reliability (rigour and stability of the criteria) and responsibility (clear identification of the responsible actor). These dimensions structure the way AI is interpreted in HR processes (Table 5).

Emerging concept | Synthetic definition | Representative excerpt
Perception of control | Power to understand or question an algorithmic decision | "If I can see the criteria, I still feel involved."
Perceived reliability | Perception that AI enforces consistent and stable rules | "AI seems less emotional than humans; it's reassuring."
Responsibility | Clarity on who is responsible in the event of an error | "Who is responsible if the algorithm is wrong? No one answers."

Table 5. Emerging concepts from thematic analysis

One of the most striking paradoxes concerns the simultaneity of a perception of increased objectivity and an amplified risk of systemic injustice. For example, as one participant put it: "AI suppresses personal preferences, but it can amplify hidden injustices in the data. It's like a justice system that can turn against those it wants to protect." This paradox, also observed by Binns et al. (2018)28, illustrates the risk of large-scale replication of invisible biases.

Finally, the presence of human control clearly improves the perception of justice. Hybrid scenarios generate a high level of trust: "When a manager validates the AI's decision, I know that there is someone behind to make up for the mistakes." This co-governance, also highlighted by Lee and Singh (2022)30, is a decisive lever for acceptability (Figure 4).

 

Figure 4. Conceptual Map of Perceptions of Algorithmic Justice

These qualitative results form the interpretative basis of the quantitative model tested in the next step.

Quantitative Results

The quantitative analyses are based on a sample of 412 employees and managers from various sectors (technology, services, public administration). The sample is sufficiently diverse to allow robust estimates in SEM.

Variable | Percentage / Average
Age (average) | 36.8 years (SD = 9.4)
Women | 48.3%
Managers | 29.1%
Experience with AI | 57.6%
Sectors of activity | 7 sectors represented

Table 6. Sample characteristics (n = 412)

The sample is composed of 412 respondents with significant demographic and professional diversity. The average age is 36.8 years (SD = 9.4), reflecting a mid-career labour force. Women make up 48.3% of the sample, indicating a relatively balanced gender distribution. Nearly 29.1% of participants are in management positions, which ensures adequate representation of decision-making profiles. In addition, 57.6% say they have prior experience with artificial intelligence tools, a key element in interpreting attitudes towards generative AI. Finally, the sample comes from seven different sectors of activity, reinforcing professional diversity and the generalizability of the results.

Descriptive Statistics and Correlations

The analyses reveal significant relationships between perceived justice, algorithmic transparency, trust, acceptability and intention to use.

 

Variable | M | SD | 1 | 2 | 3 | 4
1. Algorithmic transparency | 3.8 | 0.7 | - | | |
2. Perceived procedural justice | 3.7 | 0.7 | .51*** | - | |
3. Trust in AI | 3.4 | 0.8 | .46*** | .55*** | - |
4. Organizational acceptability | 3.5 | 0.7 | .39*** | .48*** | .61*** | -

Table 7. Descriptive statistics and Pearson correlations
*** p < 0.001

These correlations confirm the central importance of transparency and procedural justice in the acceptability of AI in the HR context.

Reliability and Validity

The model presents satisfactory psychometric indices.

Construct | Alpha | CR | AVE
AI transparency | 0.84 | 0.90 | 0.62
Procedural justice | 0.86 | 0.90 | 0.64
Trust | 0.88 | 0.90 | 0.66
Acceptability | 0.83 | 0.90 | 0.59

Table 8. Internal reliability and convergent validity

The recommended thresholds34 are respected: α > 0.80, CR > 0.85, AVE > 0.50.
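For transparency, these indices can be computed from item scores and standardized loadings as in the following sketch; the loadings shown are placeholders, not the study's estimates:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: n_obs x k matrix of scores for one construct's items."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))

def cr_ave(loadings: np.ndarray) -> tuple[float, float]:
    """Composite reliability and AVE from standardized loadings."""
    errors = 1 - loadings ** 2
    cr = loadings.sum() ** 2 / (loadings.sum() ** 2 + errors.sum())
    ave = (loadings ** 2).mean()
    return cr, ave

lam = np.array([0.78, 0.81, 0.76, 0.80])  # placeholder loadings
cr, ave = cr_ave(lam)
print(f"CR = {cr:.2f}, AVE = {ave:.2f}")  # compare: CR > 0.85, AVE > 0.50
```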

Structural Model Analysis (SEM)

The standardised coefficients confirm the initial assumptions.

 

Figure 5. Validated structural model (standardised coefficients)

 

Standardized coefficients indicate meaningful and consistent relationships within the validated model. Transparency has a moderate but robust direct effect on procedural justice (β = 0.47), showing that the clarity of algorithmic processes feeds the perception of fairness. In turn, procedural justice strongly influences trust (β = 0.52), confirming that perceived fair treatment is the main determinant of trust in algorithmic systems.

Trust significantly increases acceptability (β = 0.49), highlighting its pivotal role in the adoption of AI-assisted decisions. The direct effect of transparency on acceptability (β = 0.19) is smaller but significant, illustrating a partial influence that is mainly mediated by procedural justice and trust. The model thus highlights a solid causal chain: transparency → justice → trust → acceptability, consistent with theories of procedural legitimacy and technological adoption.

Overall, the model explains 61% of the variance in trust and 58% of the variance in organizational acceptability.

Complementary Regressions

Regression analyses show that human supervision positively moderates the relationship between perceived justice and acceptability (β_interaction = 0.22, p < 0.05). Thus, when human supervision is high, the positive effects of procedural justice are magnified.
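The reported moderation corresponds to an interaction model of the following form; the sketch below uses statsmodels with mean-centered predictors and hypothetical column names of ours:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_scores.csv")  # assumed file of construct scores

# Mean-center predictors so main effects are interpreted at average levels.
for col in ["procedural_justice", "human_supervision"]:
    df[col + "_c"] = df[col] - df[col].mean()

# Acceptability regressed on justice, supervision, and their interaction;
# the '*' in the formula expands to both main effects plus the product.
model = smf.ols("acceptability ~ procedural_justice_c * human_supervision_c",
                data=df).fit()
print(model.summary())
# A positive, significant interaction mirrors the reported
# beta_interaction = 0.22 (p < .05): supervision amplifies justice effects.
```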

Discussion

Our results demonstrate that algorithmic transparency is the main lever of perceived justice in HR decisions integrating generative AI. This observation is in line with the analyses of OBVIA (2023)18, for which the intelligibility of systems is the first condition of social acceptability. However, contrary to some technocentric positions that assume that the objectivity of AI is enough to build trust,41 our results show that the mere presence of an algorithm does not guarantee a sense of fairness. For the participants, AI is only perceived as fair when it is understandable, explainable and accompanied by active human supervision.

Compared to the SHRM guidelines (2022), which emphasize automated decision-making performance, respondents' perceptions place more importance on procedural justice than on technical efficiency. Employees are not afraid of AI as such; they fear the opacity of the criteria, the dilution of responsibilities and the risk of systematic errors. On the other hand, when the decision is co-governed by a manager and an algorithmic system, perceptions of fairness and legitimacy increase sharply, which confirms the findings of the CIPD (2023)42 on the need for a permanent "human-in-the-loop" in HR processes.

Our results also demonstrate that procedural justice is a key mechanism in building trust in AI systems. This relationship, well documented in the work of Colquitt et al. (2020)37, takes on a new dimension here: justice no longer depends solely on the coherence of human procedures, but on the socio-technical quality of hybrid systems. For participants, an algorithmic decision can be perceived as more objective and stable, but only if it remains embedded in an explicit governance framework where criteria, responsibilities and boundaries are clearly defined.

Contrary to the widespread idea that AI systematically reduces human bias, our results show that algorithmic biases are more feared than human errors. Respondents consider that biases introduced by data can be invisible, persistent and difficult to challenge. This perception aligns with the critical analyses of Binns et al. (2018)28, who evoke the "algorithmic massification of injustice". On the other hand, the participants see in human intervention an opportunity to recontextualize the decision, interpret the particular case and correct errors. This shows that trust is based less on the human or artificial nature of the decision-maker than on the clarity of the articulation between the two.

Finally, our findings demonstrate that the strategic role of HR now extends beyond talent management and compliance. In an environment where decisions are partially or fully automated, HR managers become the guarantors of ethics, transparency and decision-making legitimacy. This is in line with the OECD (2021)43 and UNESCO (2023)44 normative frameworks, which define HR professionals as the "primary human AI governance leaders". HR professionals can no longer limit themselves to integrating tools; they must establish explainability mechanisms, contestation protocols, bias audits and mediation spaces.

In any case, this study shows that the perceptions of employees and managers converge towards the same requirement: AI can contribute to organizational justice, but only within transparent, supervised and responsibly governed systems. Generative AI is therefore not a substitute for the HR function; it amplifies ethical responsibility and reinforces the need for leadership that is sensitive to issues of justice and trust.

Contributions

Theoretical Contributions

This study proposes a novel integrative model articulating generative AI, organizational justice and the ethical governance of decisions in human resources management. Our results demonstrate that procedural justice is a central mechanism for legitimizing AI systems, confirming some recent work37 while revealing a theoretical blind spot: the way in which the technological, cognitive and institutional dimensions are concretely articulated in HR systems. Unlike strictly technological approaches that attribute presumed neutrality to AI,41 our model shows that perceived fairness depends on the interplay between transparency, human control, and accountability. This theoretical contribution therefore makes it possible to go beyond the simplistic dichotomies opposing humans and machines by conceptualizing justice as a socio-technical product, built by algorithms, data and organizational actors. It also paves the way for a renewed understanding of organizational trust, now based on hybrid decision-making architectures.

Practical Contributions

From a managerial perspective, this research moves beyond abstract ethical principles and provides a structured implementation pathway for embedding AI-based decisions within ethical governance systems. Our findings demonstrate that employee acceptance of AI-supported HR decisions is primarily driven by perceived procedural justice, transparency, and ethical oversight. In other words, legitimacy is not a technological outcome; it is a governance outcome.

Based on these results, we propose a five-step integration process to operationalize ethical AI governance in HR decision-making contexts such as recruitment, performance evaluation, promotion, and workforce planning.

Step 1: Define the Governance Architecture Before Deployment

Before implementing any AI system, organizations must establish a governance framework clarifying roles, responsibilities, and escalation mechanisms. Our findings show that perceived accountability significantly enhances trust and reduces resistance. Therefore, firms should formally assign decision responsibility to identifiable human actors and map AI decision pathways. AI should assist, not replace, accountable authority.

This step ensures that governance precedes automation rather than follows it.

Step 2: Embed Explainability Mechanisms into Decision Workflows

Results indicate that algorithmic transparency moderates fairness perceptions. Employees are more likely to accept AI decisions when they understand how conclusions are reached. Organizations should therefore require minimum explainability standards for all AI-supported decisions, particularly in high-stakes HR contexts.

Practically, this includes:

  • Providing simplified explanation interfaces for affected individuals,
  • Documenting decision criteria,
  • Ensuring traceability of input variables.

Explainability reduces uncertainty, which in turn supports psychological safety.

Step 3: Institutionalize Structured Human Oversight

Our findings suggest that hybrid decision models generate higher justice perceptions than opaque automated systems. Organizations should therefore implement formal human review checkpoints for sensitive decisions, such as dismissals, promotion denials, or disciplinary actions.

Human oversight should not be symbolic. It must involve:

  • The authority to override algorithmic recommendations,
  • Review of contextual factors not captured by data,
  • Formal documentation of human judgment.

This step mitigates automation bias and protects organizational legitimacy.

Step 4: Establish Contestability and Feedback Mechanisms

Procedural justice significantly predicts acceptance and well-being outcomes. To reinforce this dimension, organizations must operationalize contestability. Employees and candidates should have access to clear appeal processes, transparent review channels, and response timelines.

This transforms AI governance from a unilateral process into a participative system, reinforcing organizational trust.

Step 5: Implement Continuous Ethical Monitoring and Bias Audits

Our results highlight the importance of perceived ethical governance as a mediator between AI use and sustainable performance. Ethical governance cannot be static; it must be iterative. Organizations should therefore conduct periodic bias audits, dataset reviews, and model performance evaluations.

In addition, HR professionals should receive AI literacy training to ensure they understand the capabilities, limits, and risks of algorithmic systems. AI governance must become an organizational competency embedded in leadership practices rather than delegated exclusively to IT departments.

Linking the Steps to Sustainable Performance

The integration of these five steps directly connects to our empirical model. Organizational agility alone does not guarantee positive outcomes. It is the combination of agility, perceived fairness, and ethical governance that supports psychological health and long-term sustainable performance.

Organizations that fail to embed AI within structured governance frameworks risk eroding trust, increasing psychosocial strain, and exposing themselves to legal and reputational risks. Conversely, firms that operationalize ethical safeguards are more likely to enhance employee resilience, strengthen institutional legitimacy, and maintain sustainable competitive advantage.

The central managerial implication is therefore clear: ethical AI integration is not a compliance exercise but a strategic governance process that directly influences performance trajectories.

Political and Normative Contributions

At the institutional level, this study offers operational guidance for policymakers, regulatory authorities, and professional HR bodies seeking to move from principled declarations to enforceable governance mechanisms for AI in employment contexts.

Although international organizations such as the OECD (2021)43, the ILO (2022)45, UNESCO (2023)44, and OBVIA (2024)18 advocate fairness, transparency, and human-centered AI, implementation frameworks frequently remain normative rather than operational. Our findings demonstrate that perceptions of procedural justice, transparency, and ethical governance significantly shape acceptance and psychological outcomes. These empirical results provide a basis for translating ethical principles into regulatory instruments.

First, minimum explainability standards should be codified in labor and digital governance regulations. Employees and candidates must have a legally protected right to receive meaningful explanations regarding algorithmic decisions affecting recruitment, evaluation, promotion, or termination. Such requirements operationalize procedural justice and reduce asymmetries of information.

Second, traceability obligations should be institutionalized. Organizations using AI in HR processes should be required to document training datasets, model logic, update histories, and performance metrics. This documentation should be accessible for regulatory inspection and internal audit purposes. Traceability ensures accountability and facilitates corrective action when unfair outcomes emerge.

Third, independent algorithmic bias audits should be mandated for high-impact employment systems. Regular third-party assessments can detect systemic discrimination, reduce litigation risks, and strengthen institutional credibility. Our results suggest that perceived ethical governance mediates the relationship between AI use and sustainable performance; therefore, regulatory frameworks that require audits directly reinforce organizational legitimacy.

Fourth, democratic oversight structures should be encouraged, particularly in public sector institutions and large enterprises. Worker representatives, ethics committees, or cross-functional AI governance boards should be involved in oversight processes. This participatory dimension strengthens social legitimacy and aligns AI governance with industrial relations traditions.

Fifth, formal rights to contest automated decisions must be embedded in regulatory texts and organizational policies. Accessible appeal mechanisms, defined response timelines, and independent review pathways should be mandatory. Contestability enhances perceptions of procedural fairness and reduces psychosocial strain associated with opaque automation.

Importantly, these measures are not solely protective safeguards; they function as structural drivers of sustainable digital transformation. As our empirical model indicates, the alignment between organizational agility, perceived algorithmic justice, and ethical governance is essential for maintaining trust, psychological well-being, and long-term performance stability.

Policymakers who neglect the governance dimension of AI risk fostering environments characterized by opacity, distrust, and social resistance. Conversely, regulatory frameworks that embed transparency, accountability, and oversight can transform AI adoption into a source of institutional resilience and competitive sustainability.

Ultimately, this research underscores that responsible AI in HR is not only a technological issue but a matter of institutional design. Sustainable digital transformation depends on coherent integration between agile organizational practices, fair algorithmic systems, and enforceable governance standards.

Limitations and Avenues of Research

Limitations of the Study

Like all empirical research, this study has limitations. The first concerns the generalizability of the results. The use of a sample composed mainly of participants from the tertiary sectors limits the scope of the findings to organizational environments characterized by a relatively high level of technological readiness. Unlike industrial or manufacturing contexts, where interactions with AI differ substantially, the perceptions studied here could vary significantly. In addition, experimental vignettes, although effective in isolating cognitive mechanisms, necessarily simplify the complexity of real work situations. Finally, the responses are partly based on subjective statements, exposed to possible social desirability biases.

Future Avenues of Research

Several avenues are emerging to deepen knowledge about generative AI and organizational justice. Longitudinal studies would allow us to observe the evolution of perceptions as organizations integrate or institutionalize AI into their HR processes. A multi-sectoral analysis comparing services, industry and public administration would also offer a more detailed understanding of contextual variations. Future research could integrate big data (HR Big Data), by cross-referencing decision logs, digital traces and metadata, in order to understand how biases emerge and spread in algorithmic systems. It would also be relevant to examine the mechanisms of psychological mediation, such as computational trust or technological dissonance, as well as the moderating effects of national or organizational cultures. Finally, the integration of participatory approaches, involving employees and managers in the co-design of tools, would be a promising way to reconcile technological innovation and social justice.46-48

Conclusion

Our research has demonstrated that generative AI can only strengthen organizational justice when it is embedded in a governance framework that is transparent, explainable, and overseen by HR professionals. Contrary to the still widespread hypothesis that AI mechanically produces more objective decisions, our results show that perceived fairness depends less on the technical performance of models than on the way they are contextualized, explained and controlled. One participant put it eloquently: "AI is not unfair in itself; it is its silence that creates injustice." This sentence sums up the requirement of intelligibility that runs through all the qualitative verbatim responses.

Quantitatively, the results extend and nuance the recent literature. Algorithmic transparency significantly influences procedural justice (β = 0.47, p < 0.001), which confirms the importance of explainability highlighted by Shin and Park (2021). On the other hand, our results contrast with the more techno-optimistic positions (Mitchell & Bryson, 2021), according to which computational neutrality would constitute a sufficient guarantee of legitimacy. On the contrary, the data reveal that trust in AI is not based solely on its analytical capabilities, but on the coherence of the socio-technical mechanisms that surround it. Thus, procedural justice significantly increases trust in the system (β = 0.52, p < 0.001) and organizational acceptability (β = 0.49, p < 0.001), confirming that ethical governance is the pivot of the adoption of AI in the workplace.

The qualitative and quantitative results also converge on a central point: the human presence reinforces decision-making legitimacy. As one verbatim excerpt illustrates: "When a manager validates the AI's decision, I know that there is someone behind me to understand my reality." This observation justifies the strategic role of HR professionals as guarantors of ethics and responsibility. Contrary to the idea that automation reduces the role of managers, our data show that HR is becoming more essential, both to contextualize decisions and to establish mechanisms for contestation and feedback.

This study therefore contributes to renewing the debate on AI in the context of human resources management. It proposes an integrative model articulating generative AI, organizational justice and ethical governance, while showing that justice is not only the result of algorithms, but of the dynamic interaction between technology, institutional rules and human agents. Our findings extend the normative recommendations of the OECD, ILO and UNESCO by highlighting the importance of a robust human-in-the-loop, not as a mere technical requirement, but as a democratic imperative.

In short, AI does not threaten organizational justice, provided it is governed. It can produce more consistent, stable and potentially fairer decisions, but only when organizations put in place transparent, explainable and accountable mechanisms. This work paves the way for new longitudinal, multi-sectoral, data-driven research into how these dynamics evolve as AI becomes institutionalized in contemporary HR practices.

Acknowledgements

None

Funding

The authors received no financial support for the research, authorship, and/or publication of this article.

Conflict of Interest

The authors declare no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

References

  1. O’Neil C. Weapons of math destruction. Crown; 2016.
  2. Raghavan M, Barocas S, Kleinberg J, et al. Mitigating bias in algorithmic hiring: Evaluating claims and practices. Proceedings of the ACM Conference on Fairness, Accountability, and Transparency. 2020; pp. 469-481.
  3. Lepri B, Oliver N, Letouzé E, et al. Fair, transparent, and accountable algorithmic decision-making processes. Philosophy & Technology. 2018;31(4):611–627.
  4. Martin K, Shilton K, Smith J. Ethical governance of AI systems. Business Ethics Quarterly. 2022;32(2):245–271.
  5. Meijerink J, Keegan A. Conceptualizing human resource management in the gig economy: Toward a platform ecosystem perspective. Journal of Managerial Psychology. 2019;34(4):214–232. 
  6. Mignenan V, Moussa MA, Mahamat YA. Quand l’intégration stratégique de la responsabilité sociale devient levier des pratiques contemporaines de la GRH [When the strategic integration of social responsibility becomes a lever for contemporary HRM practices]. African Scientific Journal. 2025;3(33):1115.
  7. Greenberg J. Organizational justice: The dynamics of fairness in the workplace. In: Zedeck S (Ed.), APA handbook of industrial and organizational psychology. 2011;3:271-327.
  8. Moore PV. Data, algorithms and the future of work. Work, Employment and Society. 2023;37(2):387–404.
  9. Davenport TH, Miller S. Generative AI for business value. MIT Sloan Management Review. 2023;64(3):1-10.
  10. Strohmeier S, Piazza F. Artificial Intelligence Techniques in Human Resource Management—A Conceptual Exploration. Intelligent Techniques in Engineering Management. 2015;32(1):100820.
  11. Meijerink J, Bondarouk T, Lepak D. AI as co-decision-maker in HRM. Human Resource Management Journal. 2023.
  12. Mignenan V. Proposition d’un modèle de construction du capital humain en milieu organisationnel [Proposal of a model for building human capital in an organizational environment]. Ad Machina. 2020;4:110-134.
  13. Manning L, Zhenhui J, Guanghui Ma. Algorithmic decision-making and fairness in organizations. Information & Management. 2023;60(2):103749.
  14. Mignenan V. Proposal of a model for building human capital in an organizational environment. Journal of Organizational Psychology. 2021;21(4):72–92.
  15. Colquitt JA, LePine JA, Piccolo RF, et al. Explaining the justice–performance relationship: Trust as exchange deepener or trust as uncertainty reducer? Journal of Applied Psychology. 2012;97(1):1-15.
  16. OECD. AI Principles. 2019.
  17. UNESCO. Recommendation on the Ethics of Artificial Intelligence. 2021.
  18. OBVIA. Operational AI governance framework. International Observatory on the Societal Impacts of AI. 2022.
  19. Weick KE. Sensemaking in organizational agility. Organization Studies. 2021;42(9):1347-1365.
  20. Denison DR, Hooijberg R, Quinn RE. Paradox and performance in agile organizations. Organizational Dynamics. 2023;52(2):100934.
  21. Nielsen K, Noblet A. Organizational interventions for psychological health. Annual Review of Organizational Psychology and Organizational Behavior. 2023;10:213–239.
  22. Parent-Lamarche A, Marchand A. Well-being at work from a multilevel perspective: What is the role of personality traits? International Journal of Workplace Health Management. 2019;12(5):298–317.
  23. Leclerc C, Gagné M. Digital intensification of work and psychological strain. Human Relations. 2021;74(12):2034–2057.
  24. Mignenan V, Moussa Mahamat A, Djabre GM. Recrutement stratégique, capital humain et performance publique : une analyse institutionnelle [Strategic recruitment, human capital and public performance: an institutional analysis]. African Scientific Journal. 2026;3(33):1442.
  25. Aguinis H, Solarino AM. Advancing sustainable performance research. Academy of Management Annals. 2023;17(1):1–35.
  26. Côté S, Piezunka H. Sustainable performance in digitally transforming organizations. Organization Science. 2022;33(4):1302–1321.
  27. Law E, Schafer B, Binns R. Explainable AI and procedural justice. AI & Society. 2022;37:1345-1358.
  28. Binns R, Veale M, Van Kleek M, et al. Like trainer, like bot? Inheritance of bias in algorithmic content moderation. ACM Conference on Fairness, Accountability and Transparency (FAT*); 2018. 
  29. Floridi L, Taddeo M. What is data ethics? Philosophical Transactions of the Royal Society A. 2016;374(2083):20160360.
  30. Lee MK, Singh J. The algorithmic management paradox. Academy of Management Review. 2022;47(3):486–503. 
  31. Johnson RB, Onwuegbuzie AJ, Turner LA. Toward a Definition of Mixed Methods Research. Journal of Mixed Methods Research. 2007;1(2):112-133.
  32. Venkatesh V, Brown SA, Bala H. Bridging the Qualitative–Quantitative Divide: Guidelines for Conducting Mixed Methods Research in Information Systems. MIS Quarterly. 2013;37(1):21-54.
  33. Aguinis H, Bradley KJ. Best practice recommendations for designing and implementing experimental vignette methodology studies. Organizational Research Methods. 2014;17(4):351-371.
  34. Hair JF, Hult GTM, Ringle CM, et al. A primer on partial least squares structural equation modeling (PLS-SEM) (3rd ed.). Sage; 2022.
  35. Podsakoff PM, MacKenzie SB, Podsakoff NP. Sources of method bias in social science research. Annual Review of Psychology. 2012;63:539–569.
  36. Colquitt JA. On the dimensionality of organizational justice: A construct validation of a measure. Journal of Applied Psychology. 2001;86(3):386–400.
  37. Colquitt JA, Zipay KP, Lynch JW, et al. Bringing justice to the field of organizational trust. Academy of Management Annals. 2020;14(2):627–656.
  38. Krafft M, Arden CM, Verhoef PC. Permission Marketing and Privacy Concerns — Why Do Customers (Not) Grant Permissions? Journal of the Academy of Marketing Science. 2022;50(2):305-321.
  39. Venkatesh V, Thong JYL, Xu X. Unified theory of acceptance and use of technology (UTAUT2). MIS Quarterly. 2016;40(2):273–315.
  40. Kock N. Common method bias in PLS-SEM: A full collinearity assessment approach. International Journal of e-Collaboration. 2015;11(4):1-10.
  41. Mitchell R, Bryson J. AI governance and legitimacy. AI & Society. 2021;36(4):1231-1243.
  42. CIPD. People professionals and AI: Ensuring responsible use in HR. Chartered Institute of Personnel and Development. 2023.
  43. OECD. The impact of AI on the workplace: Main findings from the OECD AI surveys of employers and workers. 2021.
  44. UNESCO. AI governance and human rights report. 2023.
  45. ILO. World Employment and Social Outlook: The role of digital labour platforms in transforming the world of work. International Labour Organization; 2022.
  46. Church AH, Silzer R. Are We on the Same Wavelength? Four Steps for Moving From Talent Signals to Valid Talent Management Applications. Industrial and Organizational Psychology. 2016;9(3):645-654.
  47. Kim J, Kim J, Collins C. First impressions in 280 characters or less: Sharing life on Twitter and the mediating role of social presence. Telematics and Informatics. 2021;61:101596.
  48. Mignenan V, Élie N. HR Analytics and Skills Transformation: Towards a Predictive Model for Human Capital Development in the Age of AI. International Journal of Multidisciplinary on Science and Management. 2025;2(4):243-256.