AI Measurement Science & Engineering (AIMSEC)

CMU-NIST Cooperative Research Center

📅 Tuesday, April 14th | 12:30–1:30pm ET 

📍 GHC 6501 or Zoom

Lunch provided for in-person attendees

Please register here if you plan to attend the talk: 
You can also register to meet 1-on-1 with Dr. Gokul Krishnan here:  
Title: From Indigenous Benchmarks to Grounded Reasoning: Towards Evaluating and Building Trustworthy AI Models

Abstract: As Large Language Models (LLMs) are increasingly deployed in high-stakes domains such as healthcare, law, and education, the need for robust, transparent, and fair systems is paramount. Fair AI systems are desirable to ensure that people and communities are not left out of AI solutions, while explainable AI systems that provide understandable explanations are important for improving trustworthiness. Although significant progress has been made in assessing Responsible AI aspects such as fairness and bias, existing frameworks often rely on Western-centric benchmarks and fail to capture the complex, culturally specific sociolinguistic nuances of diverse regions such as India. Furthermore, off-the-shelf models frequently generate appealing natural language explanations that sound plausible (and often convincing!) but lack faithfulness to the models' actual workings.
This talk presents a series of approaches moving from the rigorous evaluation of AI toward the construction of trustworthy models. First, I will present our work on region-based bias evaluation techniques, highlighting the IndiCASA framework's use of contrastive embeddings to capture fine-grained demographic biases in the Indian context. Second, I will present our work on exposing and quantifying geographic and socioeconomic disparities in LLM-based educational recommenders. Next, I will introduce LExT, a novel metric that jointly quantifies the plausibility and faithfulness of natural language explanations. Finally, I will briefly discuss our ongoing attempts to bridge the gap between the assessment and the generation of natural language explanations. I also plan to give a glimpse of other ongoing technical and policy research at the Centre for Responsible AI (CeRAI) at IIT Madras.

Short Bio: Dr. Gokul S Krishnan is currently a Senior Research Scientist at the Centre for Responsible AI (CeRAI), IIT Madras. Prior to this, he was a Guest Researcher at the National Institute of Standards and Technology (NIST), Maryland, USA, and a Research Scientist & Postdoctoral Fellow at the Robert Bosch Centre for Data Science and AI (RBCDSAI), IIT Madras. His research interests include Responsible AI, Machine/Deep Learning, Natural Language Processing, Healthcare Informatics, and Web Semantics. His recent research focus has been on evaluating and improving fairness and explainability in language models and on bringing the Indian context into AI evaluation. He earned his Ph.D. in Information Technology from the National Institute of Technology Karnataka, Surathkal, specializing in Healthcare Informatics. He received his Master's degree in Computer Science and Engineering from VIT University, Vellore, Tamil Nadu, and his Bachelor's degree in Computer Science and Engineering from the College of Engineering Chengannur, Cochin University of Science and Technology, Kerala. He has several international journal and conference publications to his name and is a reviewer for several SCI-indexed journals. He is passionate about web and mobile app development and is a Python enthusiast. He enjoys singing, travelling, and photography in his spare time.