April 1, 2025
People who hide their use of AI have their work taken more seriously
New research led by Professor David Restrepo Amariles of the business school HEC Paris uncovers challenges in the adoption of AI tools, particularly the phenomenon of “shadow adoption”, in which employees use generative AI tools such as ChatGPT without disclosing it. The study suggests that employees who conceal their use of artificial intelligence may receive better evaluations, an outcome the authors attribute to trust issues and misaligned incentives within firms.
The study, conducted with 130 mid-level managers at a major consulting firm, uncovered a key finding: while managers evaluated content produced with the assistance of ChatGPT more favourably, they often undervalued the effort behind that content when use of the tool was disclosed. Conversely, analysts who concealed their use of the technology tended to receive more positive evaluations, suggesting that shadow adoption may benefit them professionally. This raises concerns about fairness and the effectiveness of oversight in the adoption of generative artificial intelligence.
Managers also struggled to identify when the tools had been used unless they were explicitly informed: 44 percent suspected that ChatGPT had been used even when it had not, highlighting a trust gap between employees and management. This misalignment creates an imbalance in accountability and evaluation, with analysts benefiting from undisclosed AI use while managers misjudge the effort involved.
Professor Restrepo Amariles’ research proposes solutions to these challenges, suggesting that firms establish clear policies regarding AI use. The research recommends that companies implement mandatory disclosure of AI tools, introduce a framework for risk-sharing between managers and employees, and establish mechanisms for monitoring AI usage. It further suggests creating incentive systems that fairly recognise employees’ efforts while encouraging transparency in AI adoption.
“Our research demonstrates that AI adoption in consulting firms depends not only on technological capabilities, but also on managerial experience and structured policy frameworks,” said Professor Restrepo Amariles. “Successful integration of AI tools like ChatGPT requires not only transparency, but also fair recognition of human effort and well-balanced incentives.”
According to the authors, this is the first research to apply agency theory to AI adoption, showing that, without structured policies, firms may unintentionally reward secrecy over transparency.