By Ilias Karagiannis.
John Tasioulas is a proud member of the Greek community in Melbourne, Professor of Moral and Legal Philosophy at the University of Oxford's Faculty of Philosophy, and the first Director of its Institute for Ethics in Artificial Intelligence (AI).
In an exclusive interview with The Greek Herald, Professor Tasioulas speaks about the challenges and opportunities presented by the rise of AI, as well as the questions of ethics and human responsibility that surround it.
The Prime Minister of Greece, Kyriakos Mitsotakis, recently described AI as the biggest revolution in the history of humanity and emphasised the need to prepare for significant changes in employment and education in the coming years. What are your thoughts on his statement?
I agree that AI seems to have enormous transformative potential, both for good and bad. Whether this will be greater than the impact of fire, the wheel or electricity is something I can’t judge.
But I’d like to shift the perspective of your question somewhat. Instead of talking about us ‘preparing’ for a revolution that is going to happen anyway, as if we are simply passive spectators, we should recognise that how AI develops is a matter for individual and societal choice. For example, whether AI is used primarily to replace humans in the workplace or to subject us to surveillance, or whether it is used to help us develop greater work skills or improve access to health care and so on, is a matter of who gets to make important choices and which values guide those choices.
My own view is that we need to ensure that there is an informed citizenry that is empowered to make these choices for the common good, rather than large corporations or governments that are in thrall to them.
It is widely acknowledged that AI holds immense potential. From your perspective, what do you consider to be the most significant challenges and dangers associated with AI?
The most significant danger associated with AI is that its development and deployment will not be steered by democratic decisions in the service of the common good, and that it will instead be developed in whatever ways enrich large tech corporations or strengthen authoritarian control by governments. One of the important roles that academic institutions, like Oxford's Institute for Ethics in AI, can play is to make it clear to democratic publics that they have a choice, and to elevate the quality of the public discourse around AI so that those choices are well informed.
At the moment, I fear that this discourse is dominated by the tech industry, which is strongly resistant to regulation and which often seeks to divert attention away from the good and bad that AI-based technologies do here and now to highly speculative scenarios of existential risk based on the future emergence of Artificial General Intelligence (AI systems that equal or surpass human cognitive capacities across the board).
As AI continues to evolve rapidly, what are your thoughts on the ethical implications of advanced AI technologies, such as autonomous weapons or deepfake technology?
The ethical implications are many and diverse depending on the domain of life in question, whether it be warfare or health care or something else. We need to ask ourselves what role AI can play to enhance the quality of individual human lives and to strengthen rather than weaken our democratic systems of governance. Autonomous weapons systems are a particular challenge, not least because the kind of machine learning approach that is dominant in contemporary AI leads to automated systems that can make egregious errors that no human would ever make. So restricting the use of AI in the military context seems a high priority.
Quite apart from the issue of error, there is the fact that the ways in which AI systems arrive at results are often opaque even to their own creators. One issue we will urgently need to confront is whether there are some decisions that should be exclusively reserved to human beings. Maybe there is, in certain domains, even a right to a human decision. I gave a lecture on this topic recently at Princeton University.
Do you believe there should be global regulations or standards in place to govern the development and deployment of AI? What challenges do you foresee in implementing such regulations?
Global regulations are vitally necessary, not least because of the acute risk of a disastrous AI arms race between the leading powers, the US and China. But there are serious obstacles to establishing such regulations. At a practical level, verification is much harder for AI technology than for, say, nuclear technology. At a deeper level, we are living at a time of increased polarisation and tension between the US and China. This is a problem far bigger than AI regulation, of course.
But the example of John F. Kennedy, who negotiated the Nuclear Test Ban Treaty with the Soviet Union in the midst of the Cold War and in the teeth of opposition from his own military establishment, shows that it is possible to overcome the obstacles of ideological polarisation and mutual demonisation in order to act for the good of humanity as a whole.
On yet another level, the West has lost considerable moral authority when it comes to upholding international law standards, in light of the blatantly illegal Iraq War. So yes, there are huge obstacles to effective global regulation of AI, but we cannot afford to abandon hope.
How can we promote AI literacy and ensure that individuals understand the potential risks and benefits of AI technologies?
We all have a role to play in fostering healthy democratic debate about the risks and benefits of AI. Journalists, for example, need to ensure that they are informing their readers about the impact of AI here and now: the dependence of AI technology on the vast amounts of digital data created by ordinary people, who have no real alternative but to use the online platforms from which their data is harvested; the deployment of AI for surveillance; the replacement of humans in the workplace for only small productivity gains; and so on. All too often, however, journalists succumb to the temptation of clickbait stories about an imminent 'robot apocalypse' that distract public attention from these pressing issues.
Academic institutions also have an important role to play. The values at stake vary according to the different domains of life in which AI might be deployed, from medicine and law to management and education. So we need genuine experts in these fields to help us grapple with the distinctive challenges in each domain. It’s one question to ask, for example, whether an AI system should be used in cancer detection, another to ask whether it should be used in the sentencing of criminals. Academic experts need to produce rigorous research and then translate it into an accessible format that can feed into democratic deliberation. There are now many good books being published about the social consequences of AI, but one that I would especially recommend is Daron Acemoglu and Simon Johnson’s recent book Power and Progress: Our Thousand-Year Struggle over Technology and Prosperity.
As we are a Greek newspaper in Australia, I would like to inquire about your background and how it has influenced your career.
My parents emigrated to Australia from Dasylio, a small village near the city of Kozani, in the early 1960s. I was brought up in Melbourne and studied law and philosophy at Melbourne University. I came to the UK in 1989 to do graduate study at the University of Oxford and have pursued my career in the UK since then. I do my best to visit both Australia and Greece at least once a year, although the pandemic has made this difficult recently.
I was brought up by my parents to be tremendously proud of my Greek heritage, to regard it as a source of inspiration and strength. I think my Greek heritage played an important role in my choosing the unorthodox path of a professional philosopher. It was a way of affirming my Greek identity while growing up in a rather hostile environment which was instructing me, in various ways, that this identity was not something to be proud of.
I came across a copy of Aristotle's treatise The Politics when I was a teenager, and my interest in philosophy snowballed from there. I still regard myself as an Aristotelian today because I believe that the flourishing of the individual human being (eudaimonia) is central to ethics, and that central to human flourishing is the exercise of our rational powers. This is why AI poses such a distinctive challenge: it is the first time in human history that we have a technology that can perform, across a broad range of domains, activities that have traditionally required the exercise of human intellectual capacities.
In fact, I am currently engaged in a joint project with Josiah Ober, a distinguished historian of ancient Greece, who is based at Stanford University, that aims to bring an Aristotelian approach to AI ethics. We believe Aristotle’s central ideas offer a powerful corrective to currently dominant approaches that emphasise the satisfaction of subjective preferences or the promotion of economic growth. We hope to hold a conference in Athens on this topic next summer near the newly refurbished site of Aristotle’s ancient school, the Lyceum.