This article from MIT Technology Review features an interview with Jess Whittlestone of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge. Whittlestone discusses the need for what she and her colleagues call "ethics with urgency" for AI.
"With this pandemic we’re suddenly in a situation where people are really talking about whether AI could be useful, whether it could save lives. But the crisis has made it clear that we don’t have robust enough ethics procedures for AI to be deployed safely, and certainly not ones that can be implemented quickly. ... Compared to something like biomedical ethics, the ethics we have for AI isn’t very practical. It focuses too much on high-level principles. We can all agree that AI should be used for good. But what does that really mean? And what happens when high-level principles come into conflict? For example, AI has the potential to save lives but this could come at the cost of civil liberties like privacy. How do we address those trade-offs in ways that are acceptable to lots of different people? We haven’t figured out how to deal with the inevitable disagreements.
"AI ethics also tends to respond to existing problems rather than anticipate new ones. Most of the issues that people are discussing today around algorithmic bias came up only when high-profile things went wrong, such as with policing and parole decisions. ... We need to think about ethics differently. It shouldn’t be something that happens on the side or afterwards—something that slows you down. It should simply be part of how we build these systems in the first place: ethics by design. ... What we’re saying is that machine-learning researchers and engineers need to be trained to think through the implications of what they’re building, whether they’re doing fundamental research like designing a new reinforcement-learning algorithm or something more practical like developing a health-care application. If their work finds its way into real-world products and services, what might that look like? What kinds of issues might it raise? ... NeurIPS [a leading AI conference] now asks researchers to include a statement at the end of their papers outlining potential societal impacts of their work. ... What you need at all levels of AI development are people who really understand the details of machine learning to work with people who really understand ethics. Interdisciplinary collaboration is hard, however. People with different areas of expertise often talk about things in different ways. What a machine-learning researcher means by privacy may be very different from what a lawyer means by privacy, and you can end up with people talking past each other. That’s why it’s important for these different groups to get used to working together."