March 16, 2018
Recruitment via artificial intelligence must be monitored to avoid adopting human bias
Artificial intelligence systems need to be held accountable for human bias as AI becomes more prevalent in recruitment and selection, attendees at the Employers Network for Equality & Inclusion’s annual conference have been warned.

Hosted by NatWest, the conference, Diversity & Inclusion: The Changing Landscape, heard from experts in ethics, psychology and computing. They explained that AIs learnt from existing data, and highlighted how information such as performance review scores and employee grading was being fed into machines after being subjected to human unconscious bias.

Dr David Snelling, programme director for artificial intelligence at technology giant Fujitsu, illustrated how artificial intelligence is taught through human feedback. Describing how huge data sets were fed into the program, he explained that humans corrected the AI when it drew an incorrect conclusion from that data, and that this feedback was used to teach the AI to work correctly. However, because this feedback is subject to human error and bias, that bias can become embedded in the machine.
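The mechanism David describes is, in essence, supervised learning on human-labelled data. The toy sketch below (not Fujitsu’s system; the data, feature names and the “reviewer penalty” are invented for illustration, assuming a scikit-learn-style workflow) shows how a model trained on biased performance-review labels can reproduce that bias in its own predictions.

```python
# Toy illustration: a model trained on biased human labels learns the bias.
# All data here is synthetic; the features and the bias mechanism are
# hypothetical, not taken from any real recruitment system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Genuine ability, unrelated to group membership.
ability = rng.normal(size=n)
# A protected attribute (two demographic groups), independent of ability.
group = rng.integers(0, 2, size=n)

# Human reviewers score performance: mostly ability, plus an unconscious
# penalty for group 1. These biased scores become the training labels.
review_score = ability - 0.5 * group + rng.normal(scale=0.5, size=n)
hired = (review_score > 0).astype(int)

# The AI sees ability and group, and learns from the biased labels.
X = np.column_stack([ability, group])
model = LogisticRegression().fit(X, hired)

# Two equally able candidates, differing only in group membership.
candidates = np.array([[0.5, 0], [0.5, 1]])
probs = model.predict_proba(candidates)[:, 1]
print(f"P(hire | group 0) = {probs[0]:.2f}")
print(f"P(hire | group 1) = {probs[1]:.2f}")  # lower: the bias is embedded
```

Run as written, the second candidate receives a noticeably lower predicted probability of being hired despite identical ability, which is the effect the conference speakers warned about: the model has simply internalised the pattern in the human feedback it was given.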
March 6, 2018
How to reboot an activity based working project that has ground to a halt
by Karin Stahl • Comment, Flexible working, Technology, Workplace design