
Ethics in action: The moral ground upon which we’ve built our tech has a fault line

By Lisa Moretti | 30 July 2019 | 2 min read

Over the last few years, companies and governments, independent councils and working groups, membership organisations and academics have been in a flurry of writing ethical guidelines and principles. They've been doing this for artificial intelligence, machine learning, data (and data science), as well as technology more broadly. This has not happened without good reason. The social and political impact of algorithms over the last decade has produced a set of cascading consequences akin to a tsunami (read more here and here). In the aftermath, it seems fair to say that the moral ground upon which we've built our tech has a fault line. Another realisation is also starting to take root: we cannot code ourselves out of this one.

Technology is a human-made, human-processed cultural artefact, and we have the power to shape it differently: not just by coding it differently but by processing it differently. As such, we need to work towards creating new mental models that will allow us to reimagine organisational change and partnership agreements. We need to put the guidelines and principles that have been documented into action by developing workflow processes that allow ethical practice to be both pursued and performed. And we need to reach for new methods that expand the limiting focus on users into a more inclusive systems approach, one that takes communities and the environment into account.

How do we do this?

We’re keen to put guidelines and principles into action and to contribute to moving the tech ethics conversation forward by doing three things:

1. Support education efforts by bridging gaps in technical knowledge and in social and cultural understanding. We've created a series of workshops (two hours, one day and two days) that allow us to do this.

2. Enable teams to action ideas by helping them think through the impact of those ideas before designing and building them. Our social risk assessment acts as a speculative design tool, allowing stakeholders to consider the potential impact of their work and map imagined consequences.

3. Be an execution partner for organisations looking for a multi-disciplinary co-creation team that can think through the nitty-gritty as well as the big picture. We've created a best-in-class end-to-end process that references work done to date by global ethical thinkers as well as the Office for AI. Our process supports innovators in ethical best practice as they create the next generation of technology products and services.

The principles and guideline documents created to date have provided fertile ground for education, conversation and debate, but now it's time to transform talk into action. So whether you're taking your first step or your next steps, we look forward to working with you.

For more information, please email me at lisa.moretti@methods.co.uk