November 3, 2022

RFP winner Srijan Kumar shares his research on reducing cyber harm

By: Meta Research
Credit: Kevin Beasley, College of Computing at Georgia Institute of Technology


From time to time, Meta invites academics to propose research in specific areas that align with our mission of building community and bringing the world closer together. To learn about calls for proposals, visit our Research Awards page and subscribe to our email newsletter.

Meet Dr. Srijan Kumar, Assistant Professor at the School of Computational Science and Engineering, College of Computing at Georgia Institute of Technology. As Director of the Computational Data Science Lab for the Web and Social Media (CLAWS), Srijan centers his research on developing data mining, AI, and machine learning solutions to combat threats to the safety, equity, and well-being of online users and communities. “Bad actors and dangerous content, such as misinformation and hate speech, pose some of the biggest threats to public health, democracy, science, and society,” says Srijan. “My team aims to develop solutions that detect, predict, and mitigate cyber harm.”

In 2020, Meta funded a research proposal in applied statistics authored by Srijan and his colleague Duen Horng “Polo” Chau. “Meta is well-known for its research in AI, machine learning, and data science, and the Core Data Science (CDS) team is acclaimed for innovative work,” Srijan says of his decision to apply for the RFP. His RFP application focused on understanding how harm detection systems can be thwarted by bad actors. “Our guiding question was, ‘Can we identify the vulnerabilities of these detection systems in a systematic way?’ We then sought to build solutions to overcome these vulnerabilities.”

“I’m passionate about solving the challenges society faces due to threats on the web and social media platforms. As a computer scientist, I bring my expertise in AI, machine learning, and data science to develop accurate and robust detection, prediction, and mitigation methods that reduce cyber harm.”

Since receiving the RFP funding, Srijan and his team have produced several papers aimed at answering this question. “We found that most detection systems can be fooled into thinking bad actors are real users when those actors change their behavior,” Srijan explains. “We developed adversarial machine learning techniques to identify these behaviors and vulnerabilities in detection models, then created solutions and robust models in a sandbox environment that increase the cost for bad actors to adapt their behaviors.”
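To make the adversarial-evasion idea above a bit more concrete, here is a minimal, purely illustrative sketch in Python. It is not the team’s actual method, data, or code; the behavioral features, numbers, and scikit-learn model are all assumptions chosen for the example. It trains a toy classifier to separate synthetic benign and bad-actor behavior, then greedily nudges one bad actor’s features until the classifier is fooled; the number of nudges needed is a rough proxy for the “cost” that a more robust model tries to raise.

# Illustrative sketch only: a toy "bad actor" detector and a greedy evasion
# attack against it. Feature names, data, and the model are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic behavioral features: posts per hour, fraction of posts with links, account age (days)
benign = np.column_stack([rng.normal(1.0, 0.5, 500), rng.beta(2, 8, 500), rng.normal(400, 100, 500)])
bad = np.column_stack([rng.normal(8.0, 2.0, 500), rng.beta(8, 2, 500), rng.normal(30, 10, 500)])

X = np.vstack([benign, bad])
y = np.r_[np.zeros(500), np.ones(500)]  # 1 = bad actor

# Standardize so a unit of change means roughly the same for every feature
mu, sigma = X.mean(axis=0), X.std(axis=0)
clf = LogisticRegression().fit((X - mu) / sigma, y)

# Greedy evasion: nudge one bad actor's behavior a little at a time, in the
# direction that most reduces the detector's "bad" score, until it is fooled.
x = (bad[0] - mu) / sigma
direction = clf.coef_[0] / np.linalg.norm(clf.coef_[0])
steps = 0
while clf.predict(x.reshape(1, -1))[0] == 1 and steps < 500:
    x -= 0.05 * direction
    steps += 1

print(f"Toy detector evaded after {steps} small behavior changes")

Adversarial training in this spirit would fold such perturbed examples back into the training data, so that an evader needs far larger, and therefore costlier, behavior changes to slip past the detector.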

Ban evasion is common across social media sites, and Srijan’s research examines how adversaries create new accounts to evade detection. “Ban evasion is one of the biggest challenges to web safety and integrity because it renders detection models useless,” he says. Analyzing all of Wikipedia’s content, Srijan and his team built a model that detects ban evasion behaviors and helps community moderators work more efficiently. They are now working with Wikipedia to deploy it on the site.

Srijan has maintained a close collaboration with CDS since he began working with the team in 2020. In 2021, he and Meta co-organized the Misinformation and Misbehavior Mining on the Web workshop at the annual Knowledge Discovery and Data Mining (KDD) conference. “I’m very thankful to Meta for amplifying my research,” he says. “Their funding kickstarted our work on Trustworthy AI model fortification, a major pillar in the life cycle of detecting cyber harm. This is now a central theme for my team.”

Srijan’s research aims to help people experience safer online environments. Whether collaborating with the Centers for Disease Control and Prevention (CDC) to understand how misinformation fuels physical and cyber harassment and exacerbates anxiety among racial and gender minorities, or partnering with industry to consider how systems can be designed to be more resilient and responsive to harm, his work focuses on whole-of-society solutions. “This is made possible through collaborating with practitioners, government agencies, and platforms like Meta,” he says. “I’m looking forward to even more collaboration ahead.”