
Is AI adoption inevitable or even good?

By Josephine Young | 20 September 2019 | 4 min read

I recently had the privilege of presenting evidence to the All-Party Parliamentary Group for Artificial Intelligence (APPG AI) on the topic of diversity. The APPG AI website has the summary of my evidence and a recording of the evidence meeting.

Overall, my message to the APPG AI was that diversity fosters innovation in AI, and homogeneity hinders it. Diversity, in this context, encompasses demographic markers as well as skills and experience. Homogeneity blocks innovation because new perspectives and ways of doing things are less likely to be embraced, so the opportunity for innovation shrinks. This is no less the case for emerging technologies such as AI.

My second point was that we (as an industry) are still coming to terms with the types of tools and practices required to really harness the strengths of diverse teams. We are also still developing the right tools to empower teams to incorporate diverse and critical perspectives in how they build AI. So it's not just about having wonderfully diverse teams; it's about having wonderfully diverse teams with the tools that enable them to incorporate all perspectives and think very critically about the impact their AI products will have on the world.

What I didn’t mention, but was brilliantly covered by the other panelists (and I wholeheartedly support), is that making sure we have diverse teams is also a social justice issue. If AI is to become embedded in every part of our lives, it needs to be built with input from every part of society.

Reflecting on my experience participating in the evidence meeting, I was struck that the panelists and members of the APPG AI (myself included) all spoke as if AI adoption is inherently inevitable, and as if that inevitable adoption is inherently positive. Since then, I’ve kept wondering – is widespread AI adoption truly inevitable? Who gets the power to decide or to challenge AI adoption?

For example, AI capabilities like facial recognition and predictive algorithms are being deployed across many areas. A prominent one is the justice system, with facial recognition being used to identify people on CCTV and predictive algorithms being used to assign risk scores, from the risk of committing a crime to the risk of reoffending. In these examples, it is government that decides how the technology is deployed, and any challenge to this use would need to flow through elected MPs, the courts or well-organised citizen activism. That requires a decent level of understanding of what AI is and of its strengths and limitations – including the evidence that AI can introduce bias into these processes.

So, imagine you are a woman in prison. You don’t have access to the internet or a computer, let alone day-to-day exposure to AI through products like the Alexa personal assistant or the facial recognition filters on Snapchat. But, increasingly, based on the above examples, your experience in the justice system could be mediated by an AI system. You probably won’t be aware that it’s even happening, and at no point will anyone ask you whether you are comfortable with being treated in this way. And unless you are particularly in tune with what’s happening in the privacy and civil liberties space, it’s unlikely you’ll be directly connected to an advocacy group working on these issues. While this is an extreme example, it’s not outside the realm of possibility, and it highlights that those with the power to deploy AI are not necessarily the ones who will be acutely impacted by it. All of this sits in the context of a general lack of public discourse around the role of AI in people’s lives.

What can we do about it? The Government Digital Service’s Service Standards mandate that any digital service deployed by government – for example, an online form or the digital automation of a citizen-facing service like passport applications – must have user research and user testing before it can go live. In other words, if something is to go online, government needs to understand how it will impact citizens and what the design of that service needs to look like, based on explicitly gathered citizen requirements (rather than what government thinks the citizen requires). This results in better services for citizens, and in government leading the way on digital accessibility. When it comes to emerging technologies like AI, we need both narrow and broad user research. Rather than form a view of what we think the public will deem acceptable, we should explicitly ask and test how people feel about AI adoption.

At Methods we have built some internal prototypes to help us think through the best ways to engage citizens in conversations about emerging technology and its use in society. To do this, we drew on a combination of machine learning and user research skills and techniques. Blending these two disciplines opens up a great opportunity to engage citizens in a way that both educates them about how the technology works and gathers their views and reflections on how they’d like it deployed in their communities (if at all!).

I truly believe this assumption of AI inevitability needs to be openly discussed by government with its citizens. Our Emerging Technology practice would love to hear what you think – as well as what you might already be doing about it!