Google launches initiative to humanise artificial intelligence

Google has announced a new initiative that aims to improve the ways humans and artificial intelligence interact in both their personal and professional lives. Called the People + AI Research (PAIR) initiative, the programme will look to ensure that advances in machine learning and technical performance in areas such as speech recognition, image search and translation are better aligned with people's needs. The project will bring together researchers across Google to study and redesign the ways people interact with AI systems. According to Google, the goal is not just to publish research but also to create open-source tools that experts in the field can use, and ultimately to redefine the way we think about artificial intelligence.

PAIR’s research is divided into three domains, each based on different user needs and addressing specific challenges:

Engineers and researchers:

  • How can AI aid and augment professionals in their work?
  • How might it support doctors, technicians, designers, farmers, and musicians as they increasingly use AI?

Domain experts:

  • How might we ensure machine learning is inclusive, so everyone can benefit from breakthroughs in AI?
  • Can design thinking open up entirely new AI applications? Can we democratise the technology behind AI?

Everyday users:

  • Instead of viewing AI purely as a technology, Google’s team will ask people to re-imagine it as a material to design with. For instance, advances in computer graphics meant more than better ways of drawing pictures; they led to completely new kinds of interfaces and applications.

The PAIR team will be led by Google Brain researchers Fernanda Viégas and Martin Wattenberg, and its 12 full-time staff members will also work with academics such as Harvard University professor Brendan Meade and MIT professor Hal Abelson.

Two efforts are already in the works: Facets Overview and Facets Dive, open-source tools that improve how datasets are viewed. They give developers details on their datasets that can help them see where their training data fall short, gaps that can lead to AI bias. The effects of biased training data were made clear by Microsoft’s racist and sexist Tay chatbot. More open-source tools are sure to come out of the initiative, along with other pushes towards transparency. PAIR says on its website, “And we want to be as open as possible: we’re building open source tools that everyone can use, hosting public events, and supporting academics in advancing the state of the art.”
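The kind of gap such tools surface can be illustrated with a minimal sketch in plain Python. This is not the Facets API; the group names, labels and the 30% threshold below are invented purely for illustration of how a skewed training set might be flagged:

```python
from collections import Counter

# Toy training set: (demographic_group, label) pairs.
# Both the groups and the labels are hypothetical.
training_data = [
    ("group_a", "positive"), ("group_a", "negative"),
    ("group_a", "positive"), ("group_a", "positive"),
    ("group_b", "negative"),  # group_b is badly under-represented
]

# Count how many examples each group contributes.
group_counts = Counter(group for group, _ in training_data)
total = sum(group_counts.values())

# Flag any group supplying less than 30% of the data -- a crude,
# text-only stand-in for the per-feature summaries Facets Overview
# renders visually.
for group, count in group_counts.items():
    share = count / total
    status = "UNDER-REPRESENTED" if share < 0.3 else "ok"
    print(f"{group}: {count}/{total} ({share:.0%}) {status}")
```

A model trained on data like this would see almost no examples from the minority group, which is precisely the sort of imbalance that produced visibly biased behaviour in systems like Tay.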