TLDR: Protocol Labs’ Network Funding team is releasing a whitepaper on Impact Evaluators, a funding mechanism designed for nontraditional projects with high uncertainty and high upside. Our goal is to add structure to the ongoing dialogue and share practical implementation advice based on a year of experience. The paper is available here.
Funding vital public goods is hard, and doing it more effectively has massive economic potential. At Protocol Labs, we recognize this, and our Network Funding team is dedicated to advancing how public goods and commons are funded, allocated, and sustained. This work has led us to explore mechanisms we call “Impact Evaluators”, a term coined by Juan Benet and Evan Miyazono of Protocol Labs that builds on concepts from blockchain incentive mechanisms and retroactive funding. An Impact Evaluator provides an incentive or market mechanism for highly effective “impact funding”: groups of agents work towards shared objectives, the mechanism assesses the impact of their work against those objectives, and valuable work is retrospectively rewarded. When implemented effectively, an Impact Evaluator guides potential contributors toward particular objectives.
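To make the loop concrete, here is a minimal sketch of one evaluation round: measure contributions, score their impact against an objective, then retrospectively split a reward pool in proportion to impact. All names and the scoring function are illustrative assumptions, not an implementation from the whitepaper.

```python
def run_evaluation_round(contributions, evaluate_impact, reward_pool):
    """Score each contribution and split reward_pool proportionally to impact.

    contributions: dict mapping contributor id -> their submitted work
    evaluate_impact: function scoring a piece of work against the objective
    reward_pool: total retrospective reward to distribute this round
    """
    scores = {cid: evaluate_impact(work) for cid, work in contributions.items()}
    total = sum(scores.values())
    if total == 0:
        # No measurable impact this round: nothing to distribute.
        return {cid: 0.0 for cid in contributions}
    return {cid: reward_pool * score / total for cid, score in scores.items()}


# Toy objective: impact = number of documentation pages improved.
contributions = {"alice": 6, "bob": 2, "carol": 2}
rewards = run_evaluation_round(contributions, lambda pages: pages, 100.0)
# alice receives 60.0; bob and carol receive 20.0 each
```

In practice, the interesting design work lives in `evaluate_impact` (who judges, how impact is measured, how disputes are resolved); the distribution step above is the simple part.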
Our team at Protocol Labs believes that this mechanism offers a powerful way to fund important public goods with high upside and high uncertainty. In the past 12 months, we have (1) advanced the theory of Impact Evaluators and (2) run several smaller-scale Impact Evaluator experiments:
- Filecoin Quadratic Voting PR
- Documentation Impact Evaluator
- IPFS Impact Evaluation
- FVM Impact Evaluator & Builders Leaderboard
Thanks to interest generated at events such as Funding the Commons and CryptoEconDay, and from the community at large, we’ve seen an uptick in projects experimenting with Impact Evaluators. To share our learnings, we’ve written a whitepaper on Generalized Impact Evaluators. The paper aims to accelerate the development of Impact Evaluator mechanisms by providing:
- Standard language to define and construct Impact Evaluators
- A framework to efficiently implement Impact Evaluators
As Protocol Labs scales up its deployment of Impact Evaluators, we will build a more robust theoretical basis, which we will publish iteratively. In the meantime, please see the bottom of this article for additional resources related to Impact Evaluators.
This is part of an open-source effort. If you would like to contribute, join the Impact Evaluator Discord or email us at network-goods@protocol.ai.
Further materials:
- Vitalik Buterin, Optimism - Retroactive Public Goods Funding
- Juan Benet - Introduction to Impact Evaluators
- Juan Benet - Impact Evaluator Design
- Evan Miyazono - Impact Evaluators
- Matthew Frehlich - Impact Evaluators: Existential Challenges and Opportunities