Was this year’s UN Forum on Business and Human Rights going to be yet another gathering competing to fan fears about Artificial Intelligence, or would it also contribute concrete solutions for business on how AI can be developed safely and fairly?
The tone was set in the opening address by UN Human Rights Commissioner Volker Türk, who described AI as a ‘Frankenstein’s monster’ with ‘the power to manipulate and distort,’ while also acknowledging the technology’s ‘huge potential.’ Yet constructive ideas were clearly evident in Geneva, including in a specialist session devoted to the technology.
Governments, the Forum heard, wield enormous power — not only as regulators, but also as customers. As governments and academic institutions increasingly work with technology companies to develop their own AI systems, they have every right, and a responsibility, to drive ethical standards by demanding transparency from their partners on sensitive issues.
However, research from the AI and Equality project, undertaken in conjunction with the University of Cambridge, reported that there is currently no evidence of public authorities choosing to do this. Instead, adoption is taking place largely unnoticed, across both public and private sectors.
Contributions to the Forum suggested that the rapid introduction of AI systems is often occurring through routine IT updates, without any special consideration — or even awareness — on the part of purchasing organisations.
A wealth of information already exists within the UN system to help address this gap. This includes a report by the UN Working Group on Business and Human Rights on the human rights impacts of Artificial Intelligence, published in June this year, as well as an analysis of company responses to generative AI released as part of the UN’s B-Tech project.
The Working Group’s report calls on companies to be far more aware of how AI is developed within and for their organisations. It also warns of potential litigation risks where this goes wrong, and offers guidance on how companies can bring much greater transparency to the process.
Luda Svystunova, Head of Social Research at investor Amundi, told the Forum that the difficulty of identifying risk due to the ‘black box’ nature of AI can only be overcome through more direct engagement between human rights experts and those responsible for AI development.
Many audience contributions pointed to the risk of discrimination arising from decision-making based on large language models trained on unrepresentative data. This risk, it was argued, could be exacerbated by low levels of AI literacy among vulnerable groups, further widening existing divides.
The overall consensus was that we are now at the end of a phase in which attention has focused primarily on AI companies themselves — and at the beginning of one in which responsibility shifts to all companies and how they manage the introduction of the technology.
One of the most straightforward suggestions of the week came from John Morrison, a long-standing business and human rights leader attending what he said would be his final Forum in that capacity. “AI is simply crying out for a basic multi-stakeholder initiative,” he said. He may well be right.
Those who remember the ‘labour behind the label’ campaigns on working conditions in apparel supply chains will also have noticed the emergence of ‘labour behind AI’ campaigns at this year’s Forum.
At a side event organised by UNI Global Union, precarious working conditions and severe psychological stress faced by what are likely millions of data annotators and content moderators worldwide were brought into sharp focus. One content moderator from Portugal, Eliza, told the Forum: “We work on the hard side, trying to prevent users from seeing very disturbing content — people cutting their wrists or attempting suicide. But I have to watch these images all day. Jesus Christ, I find it suffocating.
“As a woman, I found it unacceptable when I was required to watch adult content all day and then label different sexual positions or body parts. I asked to be moved off this work, but was forced back into it.
“The targets we are given are not achievable. The policies are complicated — 40 or 50 pages long — and we watch a thousand videos per day. It’s not just ‘accept or delete,’ but having to justify every decision.”
For labour unions, the core demand remains enabling platform workers to transition from casualised to formalised work, with proper protections, fair pay and worker organisation.
More detailed proposals emerging from worker consultation included rotating staff between harmful and less harmful content areas, limiting working hours, ensuring adequate rest breaks, and providing access to independent psychological support.
Christy Hoffman, General Secretary of global trade union federation UNI Global Union, identified supply chain transparency — at least from some technology companies — as the essential starting point. “There are whole conferences about trust in AI which almost completely ignore the human element in how AI is created,” Hoffman said.
Her colleague Ben Parton, Head of ICTS, was even more direct: “Discussion about threats to society from AI focuses on outputs, but how you create it is an input issue. If you treat people like garbage at the input, it’s bound to lead to garbage at the output.”
Protecting the public from harmful content is clearly in the broader societal interest. Notably, this is also an area where those interests align with the need for technology companies to ensure large language models are trained on accurate and reliable information.
This year’s Forum heard that achieving both goals requires bringing the human labour behind AI into clear focus.
This is the second of three articles by Richard Howitt from the UN Forum on Business and Human Rights 2025. Read the first article, When it Comes to DEI, Silence is Not an Option, here. The final article will examine how the Forum addressed the challenge of closing information gaps related to severe corporate human rights risks in conflict zones worldwide.
Sustainable Brands Staff
Published Dec 22, 2025 10am EST / 7am PST / 3pm GMT / 4pm CET