Speaking at the AI Impact Summit in New Delhi, Mr. Türk told UN News that the technology must be governed through a human rights framework that ensures transparency, accountability and inclusion.
This interview has been edited for clarity and length.
Volker Türk: Artificial intelligence is a technological tool and it needs to be developed on the basis of risk assessments. Technological tools are used to exercise power, for good but also for bad, so we need to make sure that there is a framework within which they are developed, designed and used, and that’s where human rights come in.
UN News: What are the biggest human rights risks that you see from rapid AI expansion today?
Volker Türk: There is a huge issue of inequity, and that’s why I’m so happy that this AI summit is taking place in India. It’s really important that these tools are used everywhere and that they are developed everywhere.
Then there’s the issue of bias and discrimination. If the data are only collected from one part of the world, if only men are developing AI, then unconscious bias will be built in. We believe that it’s key to be mindful of vulnerable groups and minorities because they are often excluded from AI development. It’s about meaningful participation and giving a vision of a better world. Human rights provide that vision.
UN News: Generative AI is moving faster than regulation. What guardrails must governments and companies put in place as a matter of urgency?
Volker Türk: Take the pharmaceutical industry as an example: testing can last a long time because you need to make sure that any risks associated with a new product are identified before it goes on sale.
When it comes to AI tools, we need to demand that companies do a human rights impact assessment when they design, roll out and market them.
We have seen for quite some time now that some companies have bigger budgets than some smaller countries. If you are able to control technology not just in your country but around the world, you exercise power. You can use the power for good – to do things that hopefully help in areas such as health, education and sustainable development – but you can also use that power for bad things, such as automated lethal weapons, and spreading disinformation, hate and violent misogyny.
UN News: What kind of AI-driven governance or rules are required to prevent AI systems from reinforcing bias and inequality?
Volker Türk: I have had the chance to talk with the people who produce, develop and design these tools. What strikes me is that they often have only a superficial knowledge of fundamental principles when they go into the development phase.
It reminds me a little bit of Frankenstein’s monster; you develop something that you don’t control anymore. You let the genie out of the bottle.
If you’re not mindful of the dangers and the risks, you can wreak havoc. We have seen it in Myanmar, for example, where there was a lot of hate speech against the Rohingya on social media platforms.
It’s so important to bring in the perspective of each and every segment of society, especially women and young people, and to bear in mind that our brains develop in different ways.
We don’t want to create addictions that poison our minds and souls. We also need to be aware of how harmful disinformation not only destroys the social fabric but also creates divisive and polarised societies where everyone lives in their own bubble.
We also see a lot of misogyny. Many female politicians tell me that they are thinking of exiting politics because of what they experience on social media.
UN News: Five years from now, what do you think responsible AI would look like?
Volker Türk: What I hope we would have is inclusive development of artificial intelligence, where power is no longer concentrated within a handful of companies in North America, and where AI development builds on the richness and diversity of all of us in each society.
I hope for an inclusive, meaningful, participatory type of development that helps us solve the many problems and challenges in today’s world. The climate crisis, access to healthcare, education for everyone – AI can be a fantastic tool to help us achieve these goals.
The flip side is that, if we do not put forward a vision of a better world, we could end up even more polarised, with wars that are no longer controlled by humans. And that’s very dangerous.