YU News

Leaders in Tech, Research Unite to Address the Growing Threat of AI-driven Hate

President Ari Berman, third from left, and Tamar Avnet, fourth from right, director of graduate programs and associate dean, joined Chen Shmilo, right, former CEO of the 8200 Alumni Association, in a discussion on “AI vs. Antisemitism: Defending Truth in the Age of Generative Hate” at Yeshiva University Museum.

By Dave DeFusco

On the evening of Yom Hashoah, the galleries of the Yeshiva University Museum filled with a different kind of remembrance—one not only rooted in memory, but in urgency. The program, “AI vs. Antisemitism: Defending Truth in the Age of Generative Hate,” gathered technologists, researchers and community leaders to confront a rapidly evolving threat: a world in which artificial intelligence can both distort reality and defend it.

From the opening moments, organizer Chen Shmilo, former CEO of the 8200 Alumni Association, made clear that the event was not only about remembrance, but about responsibility. As a descendant of Holocaust survivors, he spoke to the stakes of allowing hatred—now amplified by algorithms—to go unchecked. The initiative he helped launch, “Hack the Hate,” reflects a growing recognition that the fight against antisemitism has entered a new domain: digital, decentralized and increasingly powered by AI.

That urgency was echoed by Yeshiva University President Ari Berman, who described the Holocaust as “an assault on reality itself.” Today, he warned, that distortion has migrated “from propaganda to platforms,” where misinformation can spread at unprecedented speed. Yet the same tools, he said, can also be used to “increase light.” Throughout the evening, that duality—AI as both risk and opportunity—remained central.

Tamar Avnet, left, director of graduate programs and associate dean in the Sy Syms School of Business, and Liram Koblentz-Stenzler, head of the Antisemitism and Extremism Desk at Reichman University in Israel, discussed just how early and subtly antisemitic narratives can take root.

During a fireside discussion, Yfat Barak-Cheney and Ben Good of Meta explored how platforms are grappling with AI-generated antisemitism. Good explained that combating harmful outputs requires more than rigid rules. Instead, AI systems are increasingly trained to understand intent—why certain content is harmful—so they can apply that reasoning dynamically across new and unpredictable scenarios. But even as companies refine safeguards, the landscape outside their platforms continues to evolve in ways that are harder to detect and control.

Research presented later by Liram Koblentz-Stenzler, head of the Antisemitism and Extremism Desk at Reichman University in Israel, in conversation with Tamar Avnet, director of graduate programs and associate dean in the Sy Syms School of Business, revealed just how early and subtly antisemitic narratives can take root. Monitoring online ecosystems, Koblentz-Stenzler described how extremist ideas often begin far from mainstream platforms—in gaming communities, alternative networks and hybrid digital spaces where ideology blends with entertainment and even finance.

“I monitor consistently this kind of content,” said Koblentz-Stenzler, describing how online video games like Call of Duty or Roblox can become unexpected entry points. Conversations that begin innocently about gameplay can gradually introduce coded language or slurs. Those who respond are then quietly funneled toward more radical spaces, such as private messaging channels.

What makes this particularly troubling, she said, is the age of the participants. “You can hear the voices of the kids, sometimes even parents telling them it’s bedtime,” she said.

The process is gradual—what she described as “soft radicalization”—but its endpoint can be far more severe. That same pattern of normalization appears in more surprising places. In her latest research, Koblentz-Stenzler traced how antisemitic language and imagery are being embedded into cryptocurrency ecosystems. Meme-based digital coins, whose value depends on visibility and virality, can incorporate slurs and conspiracy theories into their branding and promotion. As these assets circulate, they draw in ordinary users, many of them unaware of the underlying associations.

Over time, the language itself becomes detached from its origins, normalized through repetition. “The wallets don’t necessarily belong to extremists. Regular people trade with it,” she said. “When people see the word later, they won’t understand that it’s antisemitic.”

The implication is stark: antisemitism today is not only spreading, it is being disguised, repackaged and integrated into everyday digital experiences. This aligns with a broader shift highlighted throughout the program. Unlike the centralized propaganda of the past, contemporary antisemitism is decentralized and often difficult to recognize. It moves fluidly across platforms, communities and formats by adapting to evade detection and resonate with new audiences.

That makes early detection critical. By analyzing fringe platforms and emerging trends, researchers can identify “signals” before they reach mainstream visibility. This proactive approach, several speakers suggested, may be one of the most effective ways to counter the spread of digital hate. Yet the evening was not solely focused on threats. It also showcased how AI can be harnessed to counter them.

From tools that detect harmful narratives in real time to projects that use AI-generated avatars to safely share survivor testimony, the event highlighted a growing ecosystem of technological responses. Initiatives like the “One Signal Collective,” launched at the program’s close, aim to bring together engineers, researchers and community leaders to build coordinated, scalable solutions. The message was clear: the fight against antisemitism in the digital age cannot rely on any single institution. It requires collaboration across sectors, across borders and across disciplines.

As the program concluded, the significance of the setting lingered. On a day dedicated to remembering the consequences of unchecked hatred, the conversation had turned toward the future—toward algorithms, data and the systems that will shape how truth is understood in the years ahead.

“In that future, the challenge is not only to confront hate when it appears,” said Professor Avnet, “but to recognize how it evolves and to ensure that technology, rather than amplifying distortion, becomes a force for clarity, accountability and truth.”