AI Safety India Community

420 million AI users.
5.4 million developers.
Fewer than 50 working to make it safe.

This is where it starts. Training, research, and a cohort of people already doing this work — built for India, connected to the global field.

Join This Week's Discussion · See all programs ↓
30 Researchers trained in 2025
3 University clubs seeded
1 Cohort completed
Start working in AI Safety
Deep dive
Technical AI Safety
Unsafe AI is a technical problem. Learn how alignment fails, where the gaps are, and where your skills make the biggest difference.
10 weeks · Cohort-based
Deep dive
AI Governance & Policy
AI is being deployed faster than the rules governing it. Learn how policy is made, where it's failing, and where you have leverage to change it.
10 weeks · Cohort-based
Do the work
Technical Research
Technical Safety Research
Work on unsolved safety problems grounded in India's AI infrastructure. Exit with a paper or benchmark contribution.
12 weeks · Cohort-based · Stipend supported
Governance Research
Governance & Policy Research
Work on real governance gaps — welfare systems, public AI infrastructure, regulatory frameworks. Exit with a policy brief for MeitY or NITI Aayog.
12 weeks · Cohort-based · Stipend supported
Bridge Track
AI Policy Engineering
Build evaluation tools, audit frameworks, and policy-relevant benchmarks. Exit with work that bridges both worlds.
12 weeks · Cohort-based · Stipend supported
What's happening now

Live view of current programs, open applications, and upcoming sessions. Updated in real time.

People who started here
"I was exploring AI Safety on my own. It was scattered. The cohort fixed that — structure, consistency, and people who actually took it seriously. That combination changed how I approached it."

Prasanna
Now at an Impact-Aligned Startup
"Coming from eight years in public health, I had seen how poorly designed systems cause unintentional harm — especially in low-resource settings. When I started exploring AI, the parallels were immediate. The AI Safety India cohort gave me the structured foundation I was missing. It connected global AI risks to local realities and made clear that AI safety isn't a conversation reserved for advanced economies. That clarity pushed me from interest to responsibility. I've since co-founded Ethicore AI Uganda — focused on bringing AI safety and governance into conversations with policymakers, universities, and youth in Uganda."

Sylvia
Co-Founder, Ethicore AI Uganda · AI Governance & Policy
"The weekly Wednesday sessions were highly interactive, giving us a platform to share ideas freely and learn from each other. The live hands-on sessions were particularly valuable, as they allowed us to apply concepts in real time. I also really appreciated the global perspective — discussing AI challenges with participants from different countries broadened my understanding of how safety is implemented across diverse contexts. Above all, the mentorship made the space feel friendly, approachable, and truly collaborative."

Chekuri Yukthamukhi
Student
"AI Safety India was my entry point into AI safety. The cohort's facilitation model — not lecture-based — built real critical thinking rather than surface familiarity. I now apply this lens directly in my work on agentic and RPA automations, and through my role at the UNESCO Women for Ethical AI South Asia Chapter."

Neha
PM/BA · UNESCO Women for Ethical AI, South Asia
Stay close to the work
Expression of Interest
Start here.

Whether you want to learn, research, fund, or build — tell us who you are and what brought you here.

Sent to contact@aisafetyindia.org · We reply within 48 hours.