Two thirds of bosses think people should ask permission before using AI at work

68 percent of business leaders think it’s unethical for employees to use AI at work without the permission of a manager

A new survey claims that 68 percent of business leaders think it’s unethical for employees to use AI at work without the permission of a manager. The firm behind the survey believes that the rise of generative AI tools has emphasised the need for robust ethical AI frameworks to govern its application in the workplace. Without these frameworks, the technology risks threatening human roles and intellectual property in morally dubious and potentially harmful ways.

To explore these new ethical questions, the firm surveyed a group of business leaders on the use of AI tools in the workplace. When asked whether it was ethical for employees to use AI tools such as ChatGPT without their employer’s permission, 68.5 percent said that employees shouldn’t be using AI at work without permission from an employer, manager, or supervisor.

The survey also suggests that business leaders are divided on who should take responsibility for AI mistakes made in the workplace. Almost a third of respondents (31.9 percent) lay the blame solely on the employees operating the tool. Just over a quarter (26.1 percent), on the other hand, believe that all three parties – the AI tool, the employee, and the manager – share some responsibility for the mistake.

A number of major US-based authorities have already started implementing ethical AI frameworks. In October 2022, the White House released a nonbinding blueprint for an AI Bill of Rights, designed to guide responsible use of AI in the US using five key principles. The United Nations has also outlined ten principles for governing the ethical use of AI within its inter-governmental system.

Meanwhile, Microsoft has released six key principles to underpin responsible AI usage: fairness, transparency, privacy and security, inclusiveness, accountability, and reliability and safety.

A draft version of the EU’s new AI Act, which aims to promote safe and trustworthy AI development, has also recently been agreed and will now be negotiated with the Council of the European Union and EU member states.

Image: DALL-E