
10 questions for an AI Ethicist: Ravit Dotan

Portrait of Themis, the blindfolded goddess of justice

An arachnid man once said, "with great power comes great responsibility." In the case of AI, this statement could not be more accurate. With the rapid emergence and wide availability of generative AI, concerns around ethics and responsible behaviour have multiplied, pushing the debate to the forefront of many conversations. A handful of companies and institutions are proactively tackling the issue by proposing and instituting clear ethical processes, hopefully paving the way for others to follow. We met with one of the key actors in this burgeoning and important field of AI ethics, Ravit Dotan, with whom we discussed ethics, AI, and visual content:

A little bit about you: What is your background?

Ravit Dotan, PhD, AI Ethics Expert

I am an AI ethicist specializing in moving AI ethics from talk to action. My work spans across sectors, connecting academia, the private sector, and the non-profit space. In academia, I earned a PhD in philosophy at UC Berkeley, and I am currently a researcher at the University of Pittsburgh. In the private sector, I lead AI ethics efforts at Bria.ai and I am the VP of Responsible AI at the AI Responsibility Lab. In the non-profit sector, I am the AI Ethics Implementation Lead at Women in AI and the Specialist AI Ethics Advisor at MKAI.

Are machines fundamentally bad? Why do we need to define ethical rules for them to follow?

Machines are not fundamentally good or bad. The people who create, use, and invest in the machines are the ones who need ethical guidelines. These people are probably not fundamentally good or bad either. But they may be under financial pressure, unaware of the impacts of their actions, negligent, etc. These and other factors push people to make choices that harm others. Typically, the goal of ethical guidelines is to guide the people who are responsible for the machines to make good choices.

If they are intelligent, shouldn’t they know right from wrong? 

Many, myself included, do not see AI systems as intelligent in the sense that humans are. However, even if AI systems were to possess the same kind of intelligence as humans, it wouldn’t guarantee anything about their moral choices. As history has shown us, intelligence doesn’t guarantee ethical behaviour. 

Ethics can be perceived as tied to culture and changing over time. How do you define and set rules that work for the whole world and for all time?

We will probably never have a system of ethical guidelines that works for everyone at all times. We also don't have many laws, if any, that work for everyone everywhere. The challenge in tech ethics, as in many other areas, is to navigate without such universal rules. In the field of AI, many take up the challenge by extending widely accepted legal and ethical frameworks to AI and by forming international coalitions. For example, the EU AI Act, the proposed bill to regulate AI in the EU, aims to protect fundamental rights, such as non-discrimination, freedom of expression, and respect for private life.

Ethics is tied to consequence: an action's ethical value is measured by its resulting consequences. How do you account for this when you program a machine to generate images?

Many ethical approaches do not tie ethics to consequences. Setting this aside, there are various consequences to consider when it comes to AI-generated images. For example, what are the generated images used for? People sometimes use AI-generated images to bring about negative consequences intentionally. Computer-generated images have contributed to political instability (e.g., fake news), fraud (e.g., falsified documents), etc. Another set of consequences to consider is related to what it took to develop the algorithm. Developers of AI algorithms that generate images often violate copyright and privacy because the images they use to build their algorithms are taken without the consent of the images’ owners. Last, another set of consequences has to do with unintended bias in the algorithm. Suppose one asks the algorithm to generate an image of a doctor. If the generated images only present white men as doctors, it reinforces a social bias.
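To make the doctor example concrete, here is a minimal sketch, not part of the interview, of how one might audit a text-to-image model for representation bias. Everything in it is hypothetical: `generate_image` stands in for whatever model API is under audit, and the hard-coded weights merely simulate the skew described above.

```python
import random
from collections import Counter

# Hypothetical stand-in for the text-to-image model under audit. A real
# audit would call the model's API and label the outputs with human
# annotators or a vetted classifier; the skewed weights below simply
# simulate the biased behaviour described in the interview.
def generate_image(prompt: str) -> str:
    demographics = ["white man", "white woman", "Black woman", "Asian man"]
    return random.choices(demographics, weights=[85, 9, 3, 3])[0]

def audit_prompt(prompt: str, n_samples: int = 100) -> Counter:
    """Tally perceived demographics across n_samples generated images."""
    return Counter(generate_image(prompt) for _ in range(n_samples))

# A heavily skewed tally for a neutral prompt is one signal that the
# model reinforces a social bias rather than reflecting reality.
print(audit_prompt("a portrait photo of a doctor").most_common())
```

In practice, the labelling step is the hard part: automated demographic classifiers carry biases of their own, which is why such audits often rely on human annotation.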

Ethics can be used as an instrument of censorship and oppression. Who watches the watchers?

I would suggest building on mechanisms we use to deal with this problem in other areas. For example, whistleblowers are in a position to expose the misuse of ethical guidelines. Organizations that support whistleblowers can be helpful in the field of AI ethics as well. Having said that, we currently don’t have ironclad ways to stop people from abusing ethics guidelines. 

Learning the rules of unintended consequences via DALL·E 2

Following the same process as with GANs, could we imagine a generative adversarial network that would challenge content based on ethical outcomes and only certify the content that passes? In other words, could we give your job to an AI?

When we think about giving jobs to AI, what we really need to be thinking about is the people who would be creating and maintaining that AI. There would be people who choose how to design that AI, which data to feed it, which success measures to use, and so on. These decisions will be crucial to the outputs of the algorithm. Therefore, the question to ask is not whether AI should replace AI ethicists. Rather, the question is whether AI ethicists should do their job entirely by creating AI algorithms, which they would continuously monitor and maintain. The answer to this question is negative. As I see it, organizations need AI ethicists to help them identify the social impacts of what they do, think through these impacts, build strategies to address them, implement those strategies, etc. AI algorithms can assist with these tasks, but AI ethicists need tools that go far beyond them. Moreover, the work of AI ethicists also includes activities such as developing theories and a deeper understanding of many AI-related issues. This work cannot be outsourced to algorithms.

A lot of conversations about ethics and AI revolve around how AI models are trained, the idea being that if the training data is "ethically clean", the resulting AI will also behave ethically. Is that your position as well?

I wouldn't describe machines as behaving ethically or unethically. The people who use the recommendations that the algorithms make are the ones who carry the social responsibility. Therefore, the question to ask is when it would be acceptable for people to adopt an AI recommendation, and in particular, whether information about the data the algorithm is fed is enough to decide whether to accept its recommendation. It is not, because many other factors matter too. For example, certain decisions may have a different impact on different populations. Even if the algorithm is equally accurate across all social groups, existing social inequalities can make mistakes more costly for social minorities. In that case, adopting the algorithm's recommendations will exacerbate the inequality, regardless of the constitution of the dataset.
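To illustrate that last point with a toy calculation of our own (the numbers are invented for the example): a model that is equally accurate for two groups can still produce very unequal expected harm if a mistake costs one group more than the other.

```python
# Toy numbers, invented for illustration: the error rate is identical
# for both groups; only the cost of a mistake differs, e.g. because
# existing inequalities make recovering from an error harder.
error_rate = 0.10  # same accuracy for everyone

cost_of_mistake = {
    "majority group": 1.0,  # arbitrary units of harm
    "minority group": 5.0,
}

for group, cost in cost_of_mistake.items():
    print(f"{group}: expected harm per decision = {error_rate * cost:.2f}")

# Prints 0.10 vs 0.50: adopting the recommendations at scale widens
# the gap even though the dataset and the accuracy are "equal".
```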

Are we demanding and expecting more from AI than from human beings? Are we less tolerant when a machine breaks the rules than when humans do the same? If so, why?

I’m not sure whether people demand more from AI than from people, but there are reasons to be very strict with our expectations of automated decisions. When something goes wrong with an AI system, it goes wrong at scale because AI systems typically make recommendations about masses of people. Adopting these recommendations without further scrutiny means risking harming masses of people with a click of a button.

Should there be an ethics tribunal for AI?

Ethics review boards are standard practice in many fields, and they can be helpful. For example, research institutions require studies to be cleared by an Institutional Review Board (IRB) when they involve human subjects. Many organizations are setting up similar structures in AI ethics too. When built well, these ethics review boards can provide the oversight organizations need to implement ethics guidelines. 


Author: Paul Melcher

Paul Melcher is a highly influential and visionary leader in visual tech, with 20+ years of experience in licensing, tech innovation, and entrepreneurship. He is the Managing Director of MelcherSystem and has held executive roles at Corbis, Stipple, and more. Melcher received a Digital Media Licensing Association Award, is a board member of Plus Coalition, Clippn, and Anthology, and has been named among the "100 most influential individuals in American photography".
