May 18, 2021
New artificial intelligence regulations have important implications for the workplace
by Jose Alberto Rodriguez Ruiz • Comment, Technology, Workplace
The European Commission recently announced its proposal for the regulation of artificial intelligence, which looks to ban “unacceptable” uses of AI. Until now, the consequences for businesses getting AI ‘wrong’ were bad press, reputational damage, loss of trust and market share and, most importantly for sensitive applications, harm to individuals. With these new rules, two further consequences arise: outright prohibition of certain AI systems, and GDPR-style fines.
While for now this is only a proposal for the EU, the definitions and principles it sets out may have wider-reaching implications, not only for how AI is perceived but also for how businesses should handle and work with AI. The proposed regulation defines four levels of risk – unacceptable, high, limited and minimal – with HR AI systems sitting in the “high risk” category.
The use of AI for hiring and firing has already stirred up controversy, with Uber and Uber Eats among the latest companies to make headlines for AI unfairly dismissing employees. It is precisely because of the far-reaching impact of some HR AI applications that they have been categorised as high risk. After all, a key purpose of the proposal is to ensure that fundamental human rights are upheld.
Yet, despite the bumps in the road and the focus on the concerns, it should be remembered that AI – if it is ethical – is in fact one of the best means of helping to remove discrimination and bias. Continue to replicate the traditional approaches and processes embedded in existing data, and we will inevitably repeat the same discrimination, even unconsciously. Incorporate ethical and regulatory considerations into the development of AI systems, and I am convinced we will take a great step forward. The challenges lie in how AI is developed and used, not in the technology itself – and this is precisely the issue the EU proposal looks to address.
AI, let alone ethical AI, is still not fully understood, and there is important educational work to be done. Everyone involved, from the data engineers and data scientists to the HR professionals using the technology, must understand the purpose of the AI and how and why it is being used, to ensure it is being used as intended. HR also needs enough comprehension of the algorithm itself to identify when those intentions are not being followed.
Defining the very notion of what is ‘ethical’ is not simple, but regulations like the one proposed by the EU, codes of conduct, data charters and certifications will help us move towards generally shared concepts of what is and is not acceptable, helping to create ethical frameworks for the application of AI – and ultimately, greater trust.
These are no minor challenges, but the HR field has a unique opportunity to lead the effort and prove that ethical AI is possible, for the greater good of organisations and individuals.
José Alberto Rodriguez Ruiz is Chief Data Protection Officer at Cornerstone OnDemand