Skip to main content Skip to search

YU News

YU News

Students Show How AI Solves Problems in Healthcare, Security and Education

Graduate students presented research spanning artificial intelligence, cybersecurity, healthcare and data analytics—projects that ranged from predicting hit songs and detecting cyberattacks to diagnosing childhood illness and translating sign language in real time.

By Dave DeFusco

On April 30, the Katz School’s Department of Graduate Computer Science and Engineering became a showcase of how today’s most advanced technologies can tackle real-world problems. Graduate students presented research spanning artificial intelligence, cybersecurity, healthcare and data analytics—projects that ranged from predicting hit songs and detecting cyberattacks to diagnosing childhood illness and translating sign language in real time. While the tools behind the work were complex, the purpose was clear: to build practical solutions that make systems safer, decisions smarter and technology more accessible to people everywhere.

“Our students aren’t just learning concepts,” said Ming Ma, an assistant professor in the Graduate Department of Computer Science and Engineering. “They are building systems that can improve healthcare, strengthen security and make technology more accessible.” 

One project, called Musical DNA, explored a question many people have wondered about: what makes a song successful? Benjamin Morris, a student in the M.S. in Data Analytics and Visualization, built a system that tries to predict how well songs will perform on streaming platforms. Instead of looking at just a few musical features, his model examined more than 700 factors, including lyrics, song structure and listening patterns. By combining all this information, the system became much more accurate, reducing prediction errors to approximately 4%. The goal wasn’t just prediction, however; the project also revealed patterns that artists and producers could use to make better decisions about their music.

Benjamin Morris, a student in the M.S. in Data Analytics and Visualization, built a system that tries to predict how well songs will perform on streaming platforms.

Tendai Nemure, a student in the M.S. in Cybersecurity, tackled a very different challenge: how to protect companies using advanced AI systems. Today, many organizations rely on large language models—AI systems that can read and write text—to help analyze security alerts, but Nemure showed that these systems can be tricked. By inserting misleading information into a company’s knowledge base, an attacker could quietly influence how the AI thinks, potentially causing it to ignore serious threats. To address this, Nemure designed a layered defense system that checks for signs of manipulation before decisions are made. The system combines pattern detection, behavior analysis and a second AI “reviewer” to flag suspicious activity. The result is a more reliable way to keep organizations safe in an era when even the tools meant to protect us can be targeted.

Some projects focused on healthcare, where the stakes are especially high. Tadiwa Chiremba, a student in the M.S. in Artificial Intelligence, developed a tool to help diagnose lung conditions in children using sound recordings. In many rural areas, especially in parts of Africa, access to trained doctors and medical equipment is limited. Chiremba’s system uses a lightweight AI model to analyze lung sounds and identify signs of illness. Even in its early stages, the system showed promising results, correctly identifying abnormal sounds with strong accuracy. The long-term goal is to run this technology on small, affordable devices that can be used in remote clinics.

Another healthcare-focused project by Mehluli Nokwara, an artificial intelligence student, addressed a different problem: knowing when AI systems are unsure. In medical settings, overconfidence can be dangerous. Nokwara developed a method to measure how uncertain an AI model is when answering clinical questions. By analyzing patterns in the model’s responses, the system can decide when it’s better not to give an answer at all. This kind of “knowing what you don’t know” is essential for building safer AI tools in medicine.

Cybersecurity remained a major theme throughout the event. Daniel Lodi introduced Defenstra, a platform designed to help small and mid-sized businesses understand their security risks. Many smaller organizations lack the resources to hire experts, leaving them vulnerable to attacks. Defenstra combines questionnaires, public data and threat intelligence to create a clear, prioritized risk profile. Instead of overwhelming users with technical details, it offers practical, low-cost steps to improve security.

Doctoral student Namrata Patel published research on 4EV, which studies how artificial intelligence can make video editing easier and more natural. Instead of requiring complicated editing tools, the system lets users type simple instructions, such as changing movement in a scene or adjusting the background. The AI then creates smoother motion, keeps objects looking consistent from frame to frame and makes scene changes appear more realistic.

Other students explored how AI could improve learning and communication. Data analytics student Brighton Mukundwi worked on a real-time sign language translation system that uses a standard camera to interpret gestures and convert them into spoken or written language. By avoiding expensive hardware, the system could be used in a wide range of settings, making communication more accessible.

Meanwhile, computer science student Ngoni Shaani’s ZimEdu platform focused on education itself. The system helps teachers align lesson plans and materials with official curriculum standards, reducing preparation time while improving consistency. It shows how AI can support educators, not replace them, by handling repetitive tasks and allowing teachers to focus on teaching.

A project called TechBuddy by artificial intelligence student Gregory Schwartz addressed a growing digital divide. As technology becomes more central to everyday life, many older adults struggle to keep up. TechBuddy is designed to act on behalf of users by solving problems like fixing a printer or setting up Wi-Fi without requiring step-by-step instructions. The goal is to restore a sense of independence and confidence.

Some projects pushed the boundaries of what AI can do. Artificial intelligence students Vinod Kumar and Hyeonwook Kim created RoboMascot, which connects language, video generation and robotics to create expressive movements in a humanoid robot. Others, like Vibex, explored how people can build software simply by describing what they want in natural language, lowering the barrier to entry for programming.

Despite the variety of topics, a common thread ran through all the work: the desire to make complex systems more useful, more understandable and more human-centered. The presentations were a glimpse into the future—one shaped by students who are not only learning how technology works, but asking how it should be used.

“These students were solving real problems,” said Honggang Wang, chair of the Graduate Department of Computer Science and Engineering. “They were building tools to help doctors, protect businesses, support teachers and connect people. Perhaps most important, they were showing that behind every line of code is a human intention: to make life a little easier, a little safer and a little better.”