As TikTok and Meta race to flag and remove hateful comments swirling through their platforms, generative artificial intelligence is making it even harder to keep up.
"It's an everlasting game of chasing," Yfat Barak-Cheney, director of technology and human rights for the World Jewish Congress, said Wednesday at the Eradicate Hate Global Summit in Pittsburgh.
But nonprofits using AI for good may be able to help.
In its first three months partnering with TikTok, CyberWell, an Israeli tech nonprofit backed by the Shear Family Foundation of Pittsburgh, alerted the short-form video company to three major antisemitic trends.
The most heinous conspiracy theory claimed that Jews were killing babies for ritual sacrifice and selling the bodies to McDonald's for hamburger meat.
"Before you lose hope, or your lunch, we'll go into the wonderful successes of actually working together with TikTok," Tal-Or Cohen Montemayor, CyberWell's founder and executive director, told an audience of about 200 at the summit.
Ms. Montemayor said the 2018 massacre at Pittsburgh's Tree of Life Synagogue convinced her to leave her career in law to focus on hate research. By making that research open source, she said, CyberWell could eventually provide data to train the generative artificial intelligence models that are quickly becoming part of social media platforms.
She noted that one social media company, X, was not part of the panel that included policy officials from TikTok, Microsoft and Meta.
Formerly known as Twitter, X saw a steep drop-off in content moderation after it was purchased by billionaire Elon Musk in October 2022.
"It's very concerning," Ms. Montemayor said of the absence. "Actually, these platforms are the ones we need to be focused on."
Still, others at the summit noted that extremists are continuing to reach large audiences on TikTok and other mainstream platforms despite efforts to remove harmful content.
Over two weeks in July, the Institute for Strategic Dialogue logged more than 1 million TikTok views across 20 ISIS accounts.
The institute, a U.K.-based nonprofit, also found that after Mr. Musk took over Twitter, antisemitic content rose 106% and the number of accounts following known misogynist and abusive accounts rose 69%.
More concerning, the institute found, some platforms were actively recommending harmful content to users through hashtags and other forms of promotion.
TikTok removes 90% of harmful content proactively, with 75% of problem videos taken down before they get a single view, Valiant Richey, TikTok's global head of outreach and partnerships for trust and safety, said at the summit.
He highlighted a promotional video viewed more than 250 million times that encouraged users to adjust their settings and report hateful content, saying "we want to empower users."
But reporting individual pieces of content is often a frustrating and insufficient process, said Ms. Barak-Cheney of the World Jewish Congress. Instead of focusing on reporting, companies should target the offline issues that lead people to radicalization, she said.
"Hate is a process," she said.
Meta has banned Holocaust denial since 2020, although its enforcement of that policy has been spotty.
On Wednesday, Dina Hussein, the company's global head of policy development and expert partnerships for counterterrorism and dangerous organizations policy, said it's great to build a policy but "very difficult if you can't deploy it."
She said some new technologies have helped speed up flagging efforts. That includes working with tracking partners, identifying "clusters of abusive networks" and trying to prevent recidivism from specific actors.
Ms. Hussein did not specifically list generative AI, but she did talk about how quickly technology is changing.
"While we're advising our tactics, the adversary is also mutating," Ms. Hussein said.
One way Meta and TikTok have tried to tamp down on Holocaust denial is by redirecting users to educational sites.
By pointing users toward authoritative partners who might be more influential, Meta can avoid coming across like "your parents telling you not to do drugs," Ms. Hussein said.
Ms. Barak-Cheney called the redirect "a way to catch people at an early stage."
Smaller platforms, whose trust and safety teams tend to focus most of their time on spam, should also be incentivized to take on that work, she said.
Content review teams, which are often under-resourced, could augment their work with AI so that the humans are free to spend more of their time on the most impactful cases, said Michael Pappas, CEO and co-founder of Modulate, a Boston startup using AI to curb toxic conversation among gamers.
But the new tools aren't a panacea.
"Please don't just trust an AI to moderate your content," he told the panel. "It's not the right way to use this technology."
Microsoft was the most optimistic of the three major tech companies on the use of AI.
"It's difficult to think of a problem or a challenge that we face as a society that AI can't contribute to resolving," said Hugh Handeyside, senior policy manager of Microsoft's digital safety office.
"There's just untold potential."
The company is currently rolling out a content safety system as part of its Azure OpenAI Service for enterprise customers, Mr. Handeyside said. The service added ChatGPT capability in March.
Eventually, Mr. Handeyside said, AI could help the company "mitigate risk … in a global and nuanced way."
Evan Robinson-Johnson: ejohnson@post-gazette.com
First Published: September 27, 2023, 11:01 p.m.
Updated: September 28, 2023, 2:23 a.m.