The rapid development of artificial intelligence represents a major transformation, affecting economies, public services, and everyday life. Yet AI’s current trajectory faces significant economic, environmental, and accessibility challenges. Developing resilient, sustainable, and efficient AI systems is essential to ensure that AI benefits all people while remaining aligned with environmental and development goals.
Model compression is one important pathway toward more resilient AI. By reducing model size and complexity in a strategic way, compression techniques can enable more energy-efficient systems while maintaining strong performance. This makes AI more deployable in real-world settings, especially where computing resources are limited.
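Compression can take many forms, such as pruning, distillation, and quantization. As a minimal illustration of one common technique, post-training weight quantization, here is a pure-Python sketch; the values and function names are illustrative only and are not part of the Challenge specification:

```python
def quantize_int8(weights):
    """Symmetric 8-bit quantization: map floats to int8 using one shared scale."""
    scale = max(abs(w) for w in weights) / 127 or 1.0
    quantized = [round(w / scale) for w in weights]  # each value fits in int8
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate float values from the 8-bit representation."""
    return [q * scale for q in quantized]

# Toy weight vector: stored as int8, this takes 4x less space than float32.
weights = [0.12, -0.5, 0.33, 0.99, -0.27]
quantized, scale = quantize_int8(weights)
restored = dequantize(quantized, scale)
# Rounding error is bounded by half the quantization step (scale / 2).
max_error = max(abs(a - b) for a, b in zip(weights, restored))
```

Real submissions would of course rely on mature toolchains rather than a hand-rolled loop, but the principle is the same: represent the model's parameters more compactly while keeping the reconstruction error small enough to preserve accuracy.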
Resilient AI is also inclusive AI. When models require enormous computational resources, participation becomes limited to a small number of actors. By contrast, efficient and purpose-built systems expand access to innovation, enabling researchers, public institutions, start-ups, and communities worldwide to develop and deploy AI solutions. Smarter AI is not necessarily the biggest model. It is the one that delivers meaningful results while remaining accessible, adaptable, and sustainable.
COMPETITION OVERVIEW
The governments of France and India, UNESCO, and the Sustainable AI Coalition are launching the Resilient AI Challenge, a key outcome of the Resilience, Innovation & Efficiency working group of the India AI Impact Summit. The Challenge is an open international competition aimed at engaging researchers, companies, and technology innovators from around the world.
The Challenge focuses on advancing practical solutions for AI model compression. Participants are invited to develop compressed versions of selected open-source or open-weight AI base models, aiming to achieve the best balance between model accuracy and energy gains.
Many AI models are now open source. However, organizations and researchers with limited computing resources often struggle to deploy them. Making models more efficient through compression can broaden access to AI while reducing its environmental footprint.
This initiative builds on the UNESCO and UCL publication “Smarter, Smaller, Stronger: Resource-Efficient AI and the Future of Digital Transformation”, published in July 2025. The report demonstrates that relatively small design choices in how models are built and used can significantly reduce energy consumption without compromising performance. Model compression is one of the key approaches explored.
The Challenge is open to the research community, companies, start-ups, and innovators who aim to advance the field of model efficiency.
Teams may compete in one or more of the following categories. Each category is based on a specific AI model selected in collaboration with technology partners:
- Audio-to-text using Voxtral Realtime by Mistral AI
- Image-to-text using Gemma 3n by Google
- Text-to-text using Sarvam-30b by Sarvam
Each category focuses on a distinct real-world use case and model type to explore different compression techniques.
The winner of each category will be the team that delivers the most energy-efficient compressed version of the baseline model while meeting a defined threshold for accuracy.
TIMELINE
- The Challenge was officially launched at the India AI Impact Summit on February 20, 2026.
- Team registration is open starting February 20 via the challenge webpage (see below).
- The competition runs from March 23 to June 15, 2026.
- Technical leads from Mistral AI, Google, and Sarvam will present the selected base models and outline technical specifications during three kick-off meetings:
Text-to-text (Sarvam-30b by Sarvam): Download the model on AIKosh or Hugging Face and get started!
March 23, 2026, 8:30am-9:30am (ET) / 1:30pm-2:30pm (CET) / 6pm-7pm (IST)
Audio-to-text (Voxtral Realtime by Mistral AI): Download the model on Hugging Face and get started!
March 25, 2026, 9am-10am (PST) / 5pm-6pm (CET) / 9:30pm-10:30pm (IST)
Image-to-text (Gemma 3n by Google): Download the model on AIKosh or Hugging Face and get started!
The kick-off has been postponed. We will share the new date and time as soon as possible.
If you want to attend the live kick-off meetings, please make sure you have registered for the Challenge no later than the day before. A recording will be made available to those who register later or who cannot attend.
- A mid-challenge online Q&A session will be organized with each model provider.
- Assessment will take place in two rounds:
- Round 1: Intermediate compressed models will be evaluated between April 27 and May 4, with an intermediate leaderboard published.
- Round 2: Final compressed models must be submitted by June 15, with the final assessment taking place between June 15 and June 25.
- The winning teams will be announced at ITU’s annual AI for Good Summit in Geneva, Switzerland, from July 7 to 10, 2026.
WHY PARTICIPATE IN THE CHALLENGE
By participating in this international competition focused on AI model efficiency, you contribute to shaping a more accessible and sustainable future for artificial intelligence. The Challenge introduces a common accuracy threshold and ranks models by their energy gains, helping to establish a new mindset for resilient AI. Participants will gain visibility within the global AI community and contribute to practical, real-world solutions aligned with environmental and development objectives.
Prizes:
Winning teams will receive:
- International recognition through UNESCO, the AI for Good Summit, and the Sustainable AI Coalition
- Opportunities to present their work to the global AI community
- Direct engagement with participating AI model providers and their technical teams
- Coaching hours with the Capgemini Invent team and Pruna AI
- Premium access to compute capacity on the AIKosh platform
- Additional category-specific prizes, to be announced
HOW TO TAKE PART
Who can participate?
The Challenge is open to:
- Researchers from universities or research institutions
- Companies and start-ups
- Professionals working in the private or public sector
- Non-profit organizations
- Students enrolled in universities or research programs
Participants may compete in one or more categories. The Challenge is organized into three categories, reflecting different use cases:
- Audio-to-text using Voxtral Realtime by Mistral AI
- Image-to-text using Gemma 3n by Google
- Text-to-text using Sarvam-30b by Sarvam
HOW THE SOLUTIONS WILL BE EVALUATED
Submissions will be evaluated by the Challenge organizers’ technical team, which will assess both the accuracy and the energy gains of the compressed models. To guarantee fairness, the energy consumption of inference will be measured under uniform conditions, using identical hardware for all evaluations.
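The ranking rule described above, meet the accuracy threshold, then win on energy, can be sketched in a few lines of Python. The field names, threshold, and numbers below are hypothetical and are not the official scoring specification:

```python
def rank_submissions(submissions, accuracy_threshold):
    """Keep submissions that meet the accuracy threshold,
    then rank them by measured energy per inference (lowest first)."""
    eligible = [s for s in submissions if s["accuracy"] >= accuracy_threshold]
    return sorted(eligible, key=lambda s: s["energy_wh"])

# Illustrative data: team names, scores, and units are made up.
submissions = [
    {"team": "A", "accuracy": 0.91, "energy_wh": 1.8},
    {"team": "B", "accuracy": 0.87, "energy_wh": 0.9},  # fails the threshold
    {"team": "C", "accuracy": 0.93, "energy_wh": 1.2},
]
leaderboard = rank_submissions(submissions, accuracy_threshold=0.90)
# leaderboard order: team C (1.2 Wh), then team A (1.8 Wh)
```

Note that team B is excluded despite having the lowest energy use: efficiency only counts once the accuracy bar is cleared.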
PLEASE REFER TO OUR FAQ FOR TECHNICAL QUESTIONS RELATED TO ASSESSMENT
ORGANIZED BY
WITH THE SUPPORT OF
CONTACT US