Application Deadline: November 4th, 2021
The Mozilla Technology Fund (MTF) supports open source technologists whose work furthers promising approaches to solving pressing internet health issues.
The Mozilla Technology Fund (MTF) is a grantmaking program which funds technical projects in specific areas related to internet health. The MTF's first call for proposals is for projects which can expose elements of how artificial intelligence (AI) systems work, in order to reduce bias in and increase the transparency of those systems. We hope to fund projects which can empower watchdogs (including technologists and journalists) to hold the designers of AI systems accountable, and which give those watchdogs the tools to reduce bias and increase transparency.
Through the MTF: Bias and Transparency in AI Awards, we will provide awards of up to $50,000 USD each to open source technology projects which can make artificial intelligence systems more trustworthy by addressing bias and/or increasing transparency and accountability. We imagine these projects might demystify the inner workings of existing AI systems and make the design of their algorithms transparent by improving the explainability, interpretability, and predictability of their outputs. The projects we aim to fund will provide tools that help current AI systems better serve the interests of people, and/or imagine new ways of building and training AI systems in the future.
These awards are open to all applicants regardless of geographic location or institutional affiliation, except where legally prohibited. However, Mozilla is especially interested in receiving applications from members of the Global Majority or Global South; Black, Indigenous, and other People of Color; women, transgender, non-binary, and/or gender-diverse applicants; migrant and diasporic communities; and/or persons from climate-displaced or climate-impacted communities.
Examples of the kinds of projects we hope to fund include:
- Projects that help expose elements of how AI systems in consumer technology products work, the bias that may be inherent in them, and/or how to mitigate the bias in these systems
- Utilities that help developers understand and identify bias when building datasets and AI systems
- Components that allow developers to provide more transparency to users on the inner workings of AI systems
- Tools to help identify, understand, and mitigate the bias inherent in publicly available datasets
Successful applicants will:
- Have a product or working prototype in hand—projects which have not moved beyond the idea stage will not be considered
- Already have a core team in place to support the development of the project (this team might include software developers working in close collaboration with AI researchers, designers, product/project managers and subject matter experts)
- Embrace openness, transparency, and community stewardship as a methodology
- Make their work available under an open-source license
- Applications will be accepted for a period of four weeks and will then be reviewed by a committee of experts, which will make final funding decisions and allocate awards of up to $50,000 USD each out of a total pool of $300,000 USD.
- Applicants can expect to hear back within eight weeks of submitting an application; please email [email protected] with any questions.
- Applications will close on November 4th, 2021 at 12pm Eastern Time (UTC-4).
For More Information: