April 25, 2018
Nearly half of London law firms are already utilising AI
There have already been warnings from workplace experts that the legal profession isn’t one to choose for those starting out on their careers, as it’s ripe for automation, and a new survey claims these changes are happening fast. According to a survey of over 100 law firms by CBRE, nearly half (48 percent) are already utilising Artificial Intelligence (AI) and a further 41 percent have imminent plans to do so. Of the firms already employing AI, 63 percent are using it for legal document generation and review, and the same proportion for e-discovery. Due diligence (47 percent) and research (42 percent) were also common applications, along with compliance and administrative legal support (32 percent each). The use of AI will affect employment levels, with the greatest impact predicted at the junior and support levels, where nearly half (45 percent) of firms believe there will be a reduction in headcount. In contrast, only 7 percent of firms believe that senior headcount levels will be reduced.

Robots will not, as feared, steal people’s jobs and will eventually improve productivity, but they will undercut workers’ contribution sufficiently to depress their wages, according to the third report in Barclays’ Impact Series.

Over half (52 percent) of workers in a new poll have admitted looking for a new job because of frustrations over what they see as outdated ways of thinking about work practices and automation at their current company.
Artificial intelligence systems need to account for human bias as AI becomes more prevalent in recruitment and selection, attendees at the Employers Network for Equality & Inclusion’s annual conference have been warned. Hosted by NatWest, the conference, Diversity & Inclusion: The Changing Landscape, heard from experts in ethics, psychology and computing. They explained that AIs learnt from existing data, and highlighted how information such as performance review scores and employee grading was being fed into machines after being subjected to human unconscious bias. Dr David Snelling, the programme director for artificial intelligence at technology giant Fujitsu, illustrated how artificial intelligence is taught through human feedback. Describing how huge data sets were fed into the program, he explained that humans corrected the AI when it used that data to come to an incorrect conclusion, and this feedback was used to teach the AI to work correctly. However, as the feedback is subject to human error and bias, these can become embedded in the machine.
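Dr Snelling’s point can be made concrete with a short, hypothetical sketch: if the human corrections used as training labels are systematically skewed against one group, a model trained on those labels learns the same skew. The example below uses Python with NumPy and scikit-learn on entirely synthetic data; the groups, the “competence” score and the bias penalty are all invented for illustration, not drawn from any real recruitment system.

```python
# A minimal, hypothetical sketch of the mechanism described above:
# human "corrections" used as training labels carry a systematic skew
# against one group, and the trained model reproduces that skew.
# All data, names and parameters here are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Two groups of candidates with identical underlying competence.
group = rng.integers(0, 2, n)            # 0 or 1, e.g. a demographic proxy
competence = rng.normal(0.0, 1.0, n)     # same distribution for both groups

# Biased human feedback: reviewers demand a higher competence score
# from group 1 before rating a candidate positively.
bias_penalty = 0.8
labels = (competence - bias_penalty * group + rng.normal(0.0, 0.5, n)) > 0

# Train on the biased labels, with group membership visible as a feature.
X = np.column_stack([competence, group])
model = LogisticRegression().fit(X, labels)

# The coefficient on the group feature comes out negative: the reviewers'
# bias is now embedded in the model and applied to every new candidate.
print("coefficient on group:", model.coef_[0][1])

# Two candidates with identical competence get different scores.
pair = np.array([[0.5, 0], [0.5, 1]])
print("P(positive) by group:", model.predict_proba(pair)[:, 1])
```

On this synthetic data the model penalises group membership exactly as the reviewers did, which is the point the conference speakers were making: feedback loops do not launder bias out of the data, they bake it in.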

In a workplace dominated by insecurity, gig work and intelligent machines, we need to improve our understanding of their potential impact on health, safety and wellbeing, claims a new report.
