Meta isn’t setting its Oversight Board free just yet

The Oversight Board – the policy body created by Meta to weigh its most important moderation decisions – has seen its role within Mark Zuckerberg’s empire called into question due to changing content policy priorities and dwindling investment. The Oversight Board has taken steps to formalize its long-held desire to work with other companies, but Engadget has learned that Meta has so far refused to move forward with that process.
Over the last year, board members have become increasingly interested in AI policy and how their experience developing Meta’s content rules could translate into advising companies in the generative AI space. That interest has intensified as some AI companies have signaled privately that they would be willing to work with the board, according to a source close to the organization who was not authorized to speak publicly. The board began discussions with Meta last fall about the possibility, which would require the company to approve changes to legal documents that govern the board’s operations. But Meta officials have not indicated whether the company is willing to make the changes, which would likely require approval from top executives.
Platformer, which first reported on Meta’s budget negotiations with the Oversight Board, noted that the company “has long encouraged the board to seek additional sources of funding.” So far, no other companies have publicly expressed interest in working with the group, although the board has held conversations with other companies behind the scenes.
Oversight Board co-chair Paolo Carozza told Engadget in December that there had been “really preliminary” discussions between the board and AI companies, though he declined to name which ones. “It seems like a completely different time now, largely thanks to generative AI, LLMs and chatbots, [and] how a variety of users of these technologies at the retail level face a whole new set of challenges and harms that attract a lot of scrutiny,” he said at the time.
Meta has readily agreed to change the board’s governing documents in the past — such as when the trust that controls the Oversight Board’s budget funded a new organization to arbitrate content moderation disputes in Europe. But while Meta executives once promoted the idea of a seemingly independent oversight board working with other social media platforms, the prospect of the group working with a competitor in the pursuit of AI superintelligence is apparently more complicated.
Over the past five years, board members have received information from Meta officials about the inner workings of its moderation systems and other non-public details as part of their work with the company. That raises practical questions about how the board would protect Meta’s proprietary information, as well as broader strategic questions about whether Meta wants the Oversight Board working with some of the companies it now competes with fiercely, the source said. It’s also unclear how invested Meta’s current leadership is in securing the board’s future. Former president of global affairs Nick Clegg, who was one of the board’s strongest advocates, left the company last year.
Meanwhile, other board members have publicly argued that the group, made up of free speech and human rights experts from around the world, is well-positioned to guide AI companies grappling with a growing number of real-world harms. When Anthropic released its “Claude Constitution” earlier this year, the board published a lengthy analysis from member Suzanne Nossel arguing that Claude also needed the type of “oversight” that the board provides to Meta. She made a similar argument about the broader AI industry in an opinion piece in The Guardian last month.
While Nossel stopped short of directly pitching the Oversight Board to Anthropic, she said AI companies face many of the “same dilemmas” as social media platforms. “When the board was created, the idea was that we could work across the industry,” she told Engadget. “Today, as the world shifts toward an AI-centric paradigm, we are very interested in what our experience can bring to this conversation.”
Members of the Oversight Board, who naturally have a vested interest in expanding their scope, are not the only industry voices warning that generative AI platforms are essentially following the playbook of fast-moving social media companies. A former OpenAI researcher recently wrote that “OpenAI is making the mistakes Facebook made,” citing the AI company’s efforts to optimize engagement and its in-app advertising plans. The researcher pointed to Meta’s Oversight Board as an example of the type of independent governance needed in the AI industry.
The question of collaboration with other companies has become all the more urgent as the Oversight Board risks losing Meta’s support. In a statement, a Meta spokesperson pointed to earlier reports that Meta had committed to funding the board through 2028 and said “nothing has changed.” But a source close to the board told Engadget that Meta has so far handed over only half of a smaller tranche of the funds committed through 2028, amid ongoing discussions about the board’s future, including whether it will expand its scope beyond Meta.
There are also very real questions about how the Oversight Board fits into Meta’s current content moderation strategy. Zuckerberg announced last year that Meta was moving away from proactive moderation, ending fact-checking in the United States and rolling back rules on hate speech. Zuckerberg himself reportedly led the push for these changes following a meeting with then-President-elect Donald Trump. The Oversight Board, from which Meta has sometimes sought advice on major policy changes, was not consulted. The company also recently announced plans to reduce the number of human moderators in favor of AI-based systems.
“The Oversight Board is currently engaged in meaningful discussions with Meta regarding its future and the evolution of its model to ensure the organization can address the most pressing emerging challenges in AI governance, standards and accountability,” an Oversight Board spokesperson said in a statement. “At this time, no decisions have been made regarding the future of the board, and the daily work and mandate of the organization remain unchanged.”
Critics have long said the board, which has received more than $280 million from Meta, moves far too slowly. In just over five years of operation, the board has issued more than 200 decisions on specific moderation cases, which Meta is required to respect. These decisions — a tiny fraction of the millions of requests the board receives — can take months, although the board can choose to act more quickly. The board has also made hundreds of policy recommendations, which Meta must respond to but is not required to implement. The company has agreed to at least some changes in response to 75 percent of those recommendations, according to the board.
For the Oversight Board, working with a company other than Meta would help address some of the challenges it currently faces. It would strengthen the group’s credibility at a time when Meta appears to be reassessing its relationship with the board, and it would open up potential new sources of funding. But the situation highlights another long-simmering tension over the role of the “independent” oversight body: Meta has always controlled how much influence the group can actually have. And it’s unclear whether the company is willing to let the board, which has spent the past five years learning the finer details of Meta’s content moderation and policy processes, advise the companies it now competes with.
During its work with Meta, the Oversight Board has weighed in on the company’s AI rules several times. The board criticized Meta’s “manipulated media” policy governing deepfakes and other such content, leading the company to adopt new rules on AI labeling. In its most recent AI-related decision, the board urged Meta to invest in better AI detection tools and collaborate more closely with other platforms. The company has not yet formally responded to those recommendations.