Madhulika Srikumar heads the AI safety program at the Partnership on AI (PAI). With close to a decade of experience steering global initiatives in technology governance, Madhu analyzes emerging technologies and their policy implications to enable pragmatic, collective action on oversight. Her expertise spans policy research and advocacy, strategic foresight, and research program management, with the aim of effecting real-world change in industry practice and informing government regulation.

As Head of AI Safety at PAI, Madhu leads applied policy research initiatives. Collaborating with stakeholders from PAI’s diverse coalition, she has led work on norms for responsible publication of AI research and industry guidelines for the safe deployment of generative AI models. In 2023, Madhu developed and launched PAI's Guidance for Safe Foundation Model Deployment, created collaboratively with experts from the Ada Lovelace Institute, Apple, Google DeepMind, GovAI, and 40+ other institutions. The Guidance offers a customized approach to scaling oversight and safety practices based on model capability and release type, helping mitigate socio-economic and safety risks, including economic impacts, psychological harms, and malicious uses. Notably, Meta, Microsoft, and Google have publicly highlighted and endorsed the Guidance, driving responsible research and innovation practices.

Madhu’s research on the societal impacts of technologies aims to operationalize ethical ideals into concrete actions, resources, and guidelines. With transnational expertise spanning social media governance, privacy, and international cyber stability, Madhu previously served as a Public Interest Technology Fellow at New America in Washington, DC, and as an Associate Fellow at the Observer Research Foundation in New Delhi.

During this time, she led primary research on foreign law enforcement access to digital evidence held by major tech companies in the US. In work funded by the Ford and Hewlett Foundations, she liaised with state officials, industry, and law enforcement to provide concrete recommendations for officials in India and the US on resolving the gridlock around cross-border data sharing in a privacy-preserving manner.

Madhu's research has been featured and endorsed by Nature Machine Intelligence, VentureBeat, and the Atlantic Council, among others. She has advised companies, governments, and philanthropies, including the UK’s DSIT and the Office of the US Vice President, on developing guardrails for emerging capabilities. Madhu has presented findings from her consensus-building work at venues such as the Global Partnership on AI, the OECD, and the Global Philanthropy Forum, and at conferences including the Association for Computational Linguistics (ACL).

Madhu currently serves on the Advisory Board of the Centre for the Study of Existential Risk (CSER) at the University of Cambridge, where she is also a Research Affiliate. Trained as a lawyer, she holds a graduate degree (LL.M.) from Harvard Law School, where she studied as an Inlaks Foundation Scholar. While at Harvard Law, she was also the recipient of the Cravath International Fellowship and the Project on the Foundations of Private Law Fellowship. She received her B.A. LL.B. (Hons.) from Gujarat National Law University in India.

She is based in San Francisco. 