Frontier AI Safety — Why It Matters and Why You Should Care
Speaker: Hari Shrawgi
Host Faculty: Dr. Sandipan Dandapat
Date: 11th March 2026
Time: 9:00 am - 10:00 am
Abstract
As AI systems rapidly move toward frontier‑level capabilities, questions of safety, reliability, and control are becoming central to the future of the field. This talk introduces the core challenges of frontier AI safety and explains why they matter not just for researchers, but for anyone building or studying advanced AI systems today.
Along the way, I’ll share why I believe AI safety is one of the most important—and intellectually exciting—areas students can work on. The talk will also offer practical guidance on the skills students should be developing to succeed in top industrial AI labs. While these skills apply across AI, I’ll make the case that applying them to safety is both impactful and deeply rewarding.
Bio
Hari Shrawgi is a Member of the Technical Staff at Microsoft, where he spends his days thinking about how to make advanced AI systems safer—and his evenings thinking about why that work matters so much. He is deeply passionate about AI safety and believes it’s the most important thing we need to get right as AI systems become increasingly powerful.
Hari is a gold medallist in Artificial Intelligence from the Indian Institute of Science (IISc) and has nearly five years of professional experience at Microsoft. He enjoys working at the intersection of rigorous technical research and big‑picture questions about responsibility, robustness, and alignment—and is a strong believer that while AI safety is serious business, doing it well is one of the most intellectually challenging pursuits in AI today.