A report published recently by Martha Lane Fox’s Doteveryone think tank revealed that 59 per cent of tech workers have worked on products they felt might be harmful to society, with more than a quarter of those feeling so strongly about it that they quit their job. This was particularly marked in relation to AI products. Separately, 63 per cent said they wanted more space in their week to consider the impact of the tech they work on. The sample was small and might not have been representative, but the report is instructive nonetheless.
This connects to two recent trends. The first is the rise of employee activism over the social impact of Big Tech employers — from Amazon workers’ call for the company to deliver a climate change strategy (recently rejected by shareholders) to the #GoogleWalkout campaign protesting the search giant’s handling of sexual harassment, misconduct, transparency, and other workplace issues. The second is widespread concern over the implications of advances in AI — from the ethics of “everyday” applications such as “spying” voice assistants, liberty-busting facial recognition systems, and the perpetuation of entrenched biases by algorithms used for predictive policing or ostensibly fairer hiring, to the potential (propagated by science fiction cinema and philosophically inspired online games) for AI systems eventually to bring about mankind’s downfall.
Emerging recently as a counterpoint to this scepticism and scaremongering — some of it no doubt justified, some of it more fanciful — has been the concept of “AI for good”, a more specific strand of the broader “tech for good” movement.
The “AI for good” trend has led to the creation of initiatives such as the Google AI Impact Challenge, which identifies and supports 20 startups creating positive social impact through the application of AI to challenges across a broad range of sectors, from healthcare to journalism, energy to communication, education to environmentalism. Stanford University launched its Institute for Human-Centered Artificial Intelligence to great fanfare. Meanwhile, at the GSB my colleague Jennifer Aaker has developed a course — Designing AI to Cultivate Human Well-being — that seeks to address the need for AI products, services, and businesses to have social good at their core.