Reshmi Ghosh
Boosting AI鈥檚 integrity
Senior Applied Scientist and Technical Lead, Microsoft
Possession of artificial intelligence tools by villainous sleeper agents may sound like a movie plot, but it鈥檚 a real possibility that people like data scientist Reshmi Ghosh (ENG 2017, 2020, 2021) work to prevent every day.
鈥淪ecurity and safety risks around generative AI systems for search and agents are continuously evolving,鈥 she says. 鈥淲e develop safety filters to detect and prevent data exfiltration attempts, which happen when malicious instructions are hidden in a third-party document that鈥檚 uploaded with a prompt.鈥
Of course, realized this Trojan horse vulnerability long ago. As part of the MSAI Responsible AI team, Reshmi pioneered Azure Prompt Shields, a machine-learning model supporting safe and reliable prompting.
The safety models that Reshmi built run automatically in the backend, much like a guardian angel and bouncer within the AI inner monologue. Should a malicious prompt slip through, it鈥檚 recognized, neutralized and tossed out before any mischief happens.
The benefit to millions of Microsoft users is knowing that information from emails, drives and documents can鈥檛 be mined and redirected with these protections in place, thanks to data integrity.
Reshmi鈥檚 job is among the most coveted in the AI space. She joined Microsoft as one of about 20 members of the company鈥檚 annual AI Responsibility incubator, selected from a pool of almost 20,000 applicants. After rotations throughout the company, she landed in her present role focused on improving Copilot鈥檚 search and conversation capabilities.
She also mentors students in the Boston area who study AI, adding industry perspective to what they鈥檙e learning from professors.
鈥淚 love to see students grow as I have been a student for so long, so it鈥檚 a community that鈥檚 important to me and I want to see AI learning thrive,鈥 she says.
Story by Elizabeth Speed