AI’s responsibility gap (and the messages of Dune)

First published in Relevance Magazine.

‘If used responsibly’ is a phrase that prefaces many speculative discussions about the transformative potential of Artificial Intelligence (AI). NASA, for instance, has reportedly committed to using AI responsibly to speed up discoveries. Yet ‘responsibly’ is such a broad term that governments in Australia and abroad are still exploring how to regulate the technology without hindering its use.

Are we talking about legal responsibility? Legal responsibility usually involves blame and punishment. Moral responsibility can attract blame too, and sometimes praise. The Royal Commission into the Robodebt Scheme exposed issues of legal responsibility and moral failings within the Australian Public Service (APS). The Australian Public Service Commission (APSC) is grappling with the consequences.

AI researchers speak of ‘artificial agency’: the actions remain computational and embodied in a machine, but the machine learns. Look ahead 15 years (or perhaps less) and ask who, or what, is responsible when AI learns not just from what it is given but from what it takes from its ‘experience’. How much more complex does assigning responsibility become?

Warnings from literature

Frank Herbert’s novel Dune has recently been adapted for the big screen; we often forget that it was first published nearly 60 years ago. The backstory of the Dune series turns on a key event, the ‘Butlerian Jihad’, during which humanity destroys its thinking machines. The mantra repeated by the characters in the novel is, ‘Thou shalt not make a machine in the likeness of a human mind.’

Machines that think are anathema to the inhabitants of the Dune universe. The understanding that permeates the culture is that humans set the rules: ‘We dump the things which destroy us as humans!’ Today, while we talk about keeping humans in the loop, we are actively pursuing the very thinking machines that the inhabitants of the Dune universe so feared.

More recently, Kazuo Ishiguro’s Klara and the Sun (2021) is narrated by an Artificial Friend (AF), a solar-powered robot designed to help raise children who have been genetically engineered for enhanced academic ability. The story follows Klara as she tries to make sense of a complex human world filled with emotion, and it offers intriguing insights into how a robot created to provide empathy, care, and companionship learns.

In Dune, we see a deep distrust of any technology that can match a human mind. In Klara and the Sun, we are welcomed into an AI’s world and asked what distinguishes humans from robots.

What do we mean by responsibility?

We describe a person as responsible when they undertake duties and act under detailed direction from others; in the workplace, managers carry this kind of responsibility. We also assume responsibility for others and become accountable for their actions, as parents do. In both scenarios we are accountable and actively answerable. But all of this occurs within the social norms of the time, and norms evolve.

When assessing responsibility, there is a superficial judgement based on compliance with laws and procedures, and a deeper judgement that takes context and intent into account.

AI introduces new challenges for assigning accountability. The scale and speed of the algorithms mean that when issues arise, the interconnectedness of, and our reliance on, AI systems create a tangled web.

Accidents involving vehicles that might be fully autonomous, partially autonomous, or fully under human control will quickly raise questions about the design and purpose of the technology, and about the extent to which humans are responsible for the machine’s actions and their own.

AI systems, especially those that learn from environmental inputs as Klara does, lack transparency in much the same way that people find it hard to explain exactly why they act. Most of us can broadly explain why we made a certain decision, but we would struggle to recount the detailed thoughts and assessments that led to it. Yet this is what we expect of machines.

How knowledge is represented within the machine, and how the machine reaches its conclusions, will likely become more opaque to both us and the machine itself. The logic becomes harder to diagnose as more data points feed the learning.

Not a tomorrow problem

The responsibility gap isn’t new, but the addition of AI makes it more complex. We live in a technological world that we have created. We shape it, and it shapes us. Nowadays, there’s very little we do that doesn’t happen within our tech-enabled world. It’s nearly unthinkable that we could step outside what we’ve built and still survive. 

If we want to avoid a dystopian world like Dune’s, where machines that think are met with dark suspicion, we need to ponder Klara more deeply and ask what it truly means to be human in a world where technology possesses human-like capabilities. We must consider our responsibilities in the world we have created. If we keep viewing AI as simply a way to do what we do today more quickly, our civilisation is heading for the graveyard.
