AI – and most tech development – is undertaken by technologists who are generally male, of a similar age and socio-demographic profile, and perhaps without the broadest view of the world. This lack of diversity manifests itself in the products produced: Siri and Alexa have default female names, voices and personas, and are cast as helpful or passive supporters of a user's lifestyle. This contrasts with IBM's Watson or Salesforce's Einstein, both of which are presented as complex problem-solvers tackling big global issues.
Surely the fastest way to flip this perception is to render AI genderless, as Sage has done with its finance PA, Pegg.
But the longer-term approach requires more effort: expanding the breadth of people working in technology and AI development. Much harder to achieve, but surely worth the effort?
Eminent industry leaders worry that the biggest risk tied to artificial intelligence is the militaristic downfall of humanity. But there's a smaller community of people committed to addressing two more tangible risks: AI created with harmful biases built into its core, and AI that does not reflect the diversity of the users it serves. I am proud to be part of that second group of concerned practitioners. And I would argue that failing to address the issues of bias and diversity could lead to a different kind of weaponized AI.