IndiaAI has announced the selection of five cutting-edge projects aimed at strengthening the nation’s efforts to develop safe, transparent, and trustworthy artificial intelligence systems. The projects, chosen under the ‘Safe & Trusted AI’ pillar of the IndiaAI Mission, focus on deepfake detection, bias mitigation, and AI penetration testing.
The initiative follows the second round of Expressions of Interest (EoI) launched on December 10, 2024, which received over 400 proposals from academic institutions, start-ups, research bodies, and civil society organizations. A multi-stakeholder committee evaluated the submissions and finalized five projects that will translate India’s vision of “Safe & Trusted AI” into actionable solutions.
The five selected projects are:
Deepfake Detection:
Saakshya: Multi-Agent, RAG-Enhanced Framework for Deepfake Detection and Governance – by IIT Jodhpur and IIT Madras.
AI Vishleshak: Enhancing Audio-Visual Deepfake and Signature Forgery Detection – by IIT Mandi and the Directorate of Forensic Services, Himachal Pradesh.
Real-Time Voice Deepfake Detection System – by IIT Kharagpur.
Bias Mitigation:
Evaluating Gender Bias in Agriculture LLMs and Creating Digital Public Goods for Fair Data Work – by Digital Futures Lab and Karya.
Penetration Testing & Evaluation:
Anvil: Penetration Testing and Evaluation Tool for LLM and Generative AI – by Globals ITES Pvt Ltd and IIIT Dharwad.
According to IndiaAI, these initiatives will enhance India’s AI governance ecosystem by enabling real-time deepfake detection, advancing forensic capabilities, addressing gender bias in AI models, and building tools for testing the robustness of generative AI systems.
By bringing together leading academic institutions, industry experts, and civil society partners, the IndiaAI Mission aims to foster innovation and ethical AI practices, ensuring that India’s AI ecosystem remains inclusive, secure, and globally competitive.