This is where it starts. Training, research, and a cohort of people already doing this work — built for India, connected to the global field.
Live view of current programs, open applications, and upcoming sessions. Updated in real time.
I was exploring AI safety on my own, and my learning was scattered. The cohort fixed that: structure, consistency, and people who actually took it seriously. That combination changed how I approached the field.
Coming from eight years in public health, I had seen how poorly designed systems cause unintentional harm — especially in low-resource settings. When I started exploring AI, the parallels were immediate. The AI Safety India cohort gave me the structured foundation I was missing. It connected global AI risks to local realities and made clear that AI safety isn't a conversation reserved for advanced economies. That clarity pushed me from interest to responsibility. I've since co-founded Ethicore AI Uganda — focused on bringing AI safety and governance into conversations with policymakers, universities, and youth in Uganda.
The weekly Wednesday sessions were highly interactive, giving us a platform to share ideas freely and learn from each other. The live hands-on sessions were particularly valuable, as they allowed us to apply concepts in real-time. I also really appreciated the global perspective — discussing AI challenges with participants from different countries broadened my understanding of how safety is implemented across diverse contexts. Above all, the mentorship made the space feel friendly, approachable, and truly collaborative.
AI Safety India was my entry point into AI safety. The cohort's facilitation model — not lecture-based — built real critical thinking rather than surface familiarity. I now apply this lens directly in my work on agentic and RPA automations, and through my role at the UNESCO Women for Ethical AI South Asia Chapter.
Whether you want to learn, research, fund, or build — tell us who you are and what brought you here.