Even a $2.4 billion lawsuit won’t stop online hate – Silicon Valley was purpose-built to scale it up

In October 2021, Professor Meareg Amare was doxxed on Facebook. Weeks later, he was brutally murdered in his own home – writes Courteney Mukoyi. Now, Facebook’s parent company Meta is facing a $2.4 billion lawsuit, accused of inciting violence in Amare’s native Ethiopia. Amare’s doxxing was no accident. His information didn’t just fall into the wrong hands – it was deliberately amplified as a call to violent, vigilante justice.
In today’s world, the state’s role in investigating wrongdoing and protecting the innocent has been superseded by online mobs who act as judge, jury, and executioner.
And not only in Ethiopia.
In India, 20 people were murdered in the space of two months after being accused of child abduction over WhatsApp in 2018.
Soon after, Germany witnessed xenophobic riots after social media rumours claimed a German was killed by asylum seekers. And just last year, historic riots broke out in the UK following the tragic killing of three young girls in Southport – fuelled by a wave of disinformation about the identity of the alleged killer.
In each of these cases the disinformation targeted minority or Muslim communities, who already face record levels of hate crime.
From the moment an algorithm is born, it begins to mutate, learning how – and who – to hate based on the data and patterns it learns from. Like the butterfly effect, even the smallest trace of bias or discrimination is instantly amplified.
From then on, it quickly becomes unrecognisable. Uncontrollable. Even to its creator.
But even if he wanted to, Mark Zuckerberg can’t pull the plug.
It’s not only that the machine has outgrown its maker, it’s also that none of the relevant players have the full toolkit to contain it. Tech giants have the technical expertise but lack either the will or the legal freedom to act. National governments have legislative power but move far too slowly to keep pace with innovation. And civil society organisations may understand the harms better than anyone yet lack the infrastructure or influence to enforce change.
Take, for example, the EU’s Digital Services Act – first proposed in late 2020 but not enforced until 2024. In the lengthy period of its development – ChatGPT was only released in 2022 – the digital landscape dramatically shifted, leaving its rules outdated before they were even introduced.
Since the act’s introduction, enforcement – which includes measures to curb online hate speech and algorithmic amplification of extremist content – has proven difficult. Attacks on minorities, including Muslims and Jews, are at an all-time high. The far-right have even used the act as justification for their own dehumanising rhetoric.
At the UN last month, Mohammad Al-Issa, secretary-general of the Muslim World League, claimed Islamophobia is a growing crisis. Al-Issa has repeatedly warned that AI could weaponise social media, spread disinformation, and recruit extremists – which is exactly why he argues faith leaders must be included in the development of ethical frameworks. His warnings echo those from Pope Francis and the Vatican who claim AI is a ‘shadow of evil’.
Al-Issa’s efforts to counter hate long predate today’s AI debate. In January 2020, he led a delegation of Muslim leaders to the Auschwitz-Birkenau memorial – one of the most senior Islamic visits to the site – underscoring a broader commitment to confronting the legacy of hatred and violence. Today, as war rages in the Middle East, that commitment to fostering interfaith understanding is more urgent than ever.
“If faith leaders do not have a seat at [AI] events,” Al-Issa cautioned in a recent article, “…the evolving debate on AI would be missing important subject-matter experts that are necessary to avert any possibility of a new era of AI-powered extremism.”
If any progress is to be made, technology, policy, and community must stop working in isolation and start building a shared, coordinated response.
First, we must invest in research and development beyond the Silicon Valley bubble, simultaneously reducing our dependence on the increasingly erratic governance of President Trump. Next, we should introduce new rules, requiring corporations and organisations to consult community stakeholders throughout their operations, and to ensure that algorithms are trained on inclusive datasets.
At the same time, the EU must explore new tools to detect and mitigate the inevitable bias of AI systems. We should also roll out universal AI literacy programmes and accountability platforms, educating the public on potential threats and empowering them to report concerns.
What happened in Ethiopia or Southport is not an anomaly, it’s a warning that today’s platforms are no longer neutral spaces. They are active agents in shaping public sentiment and behaviour – too often in ways that incite violence and polarise societies.
Ultimately, this lawsuit against Meta isn’t just about accountability – it’s about prevention. Regulatory standards will certainly help, but restrictions must be balanced out by diversification.
Only then can we retain the benefits of these technologies without surrendering the very idea of justice to an uncontrollable algorithmic mob.
About the author:
Courteney Mukoyi is the founder and director of the Justice Code Foundation, which leverages existing and emerging technologies to promote human rights in Zimbabwean communities. He is a CivicTech enthusiast with a Master of Laws degree in International Trade from the University of Cape Town.
Mukoyi has received multiple awards, including the 2022 Democracy Innovation Award and the 2023 Wangari Maathai AI Impact Award. In 2023, he was selected for the Mandela Washington Fellowship. He is currently serving on the UNHRC Youth Advisory Board and the European Union Youth Sounding Board. He has worked with various organisations, including the Accountability Lab and the African Union, to advance the use of technology and artificial intelligence in human rights and civic engagement. He is a serial entrepreneur who also founded an InsurTech startup called TopSure, which began his journey to create unicorn startups in Africa.