Yesterday, my colleague Lina Bankert wrote about three new federal grant competitions that have just been posted. Those new to these competitions may find the evaluation requirements and research-design options (explained below) overwhelming. Federal grant applications typically require:
- An evidence-based rationale for the proposed project’s approach, such as a logic model
- Citations of prior research that support key components of a project’s design and meet the rigor thresholds specified by the What Works Clearinghouse
- Expected outcomes and how applicants will measure them with valid and reliable instruments
- Explanation of how the proposed project will be studied to understand its impact
Proposals may be scored by two kinds of reviewers: those with programmatic expertise and those with evaluation expertise. Each section of the application is allocated a certain number of points, which sum to a final score that determines which proposals receive awards. The evaluation section can represent up to 25% of the total points available, so a strong one can make or break an application.
Writing these sections requires a sophisticated understanding of research methodology and analytical techniques in order to tie the application together with a consistent and compelling evidence base. Our evaluation team at Bellwether has partnered with a number of organizations to help them design programmatic logic models, shore up their evidence base, and write evaluation plans that have contributed to winning applications to the tune of about $60 million. This includes three recent partnerships with Chicago International Charter School, Citizens of the World Charter Schools, and Grimmway Schools — all winners in the latest round of Charter School Program (CSP) funding for replication and expansion of successful charter networks.
Through the years, we’ve noted that some applicants focus so much on writing about their program that they practically stumble upon the evaluation requirement, making for a mad dash before the submission deadline. Here’s why savvy applicants engage evaluators early on:
- Evaluators can help ensure applicants are applying to the right grant. For example, the Education Innovation and Research (EIR) grant has three funding tiers (Early-Phase, Mid-Phase, Expansion) that correspond to different levels of evidence. Applicants often need help discerning whether they have enough of the right kind of evidence to apply for this grant and, if so, at which tier they should focus. Applying for these grants is a huge time commitment, so evaluators can help vet whether the prospective grantee has what it takes to proceed.
- Evaluators can find, and help fix, gaps in a program’s logical chain of causes and effects. Evaluators hear program descriptions and start mentally sliding the different components into logic models; they can’t seem to help it. That instinct lets them iterate on and improve the logic model, ensuring it is informed by evidence and easily understood by an unfamiliar audience.
- Evaluators ensure proposed outcomes for the program can be measured with confidence. Not all metrics are alike, and evaluators are trained to understand why and how various tools are used, how well they perform, and what their limitations are. Evaluators can help mitigate the risk of measurement error by introducing multiple measures.
- Evaluators are obsessed with alignment, and that’s a good thing. They can help ensure the evidence supporting the program’s design connects to its proposed strategies and outcomes. Evaluators can then ensure enough data is collected during a program’s implementation to enable sophisticated, timely feedback loops on progress and challenges.
To learn more about Bellwether’s evaluation team and how we can partner with you on grant applications or other work, email our team at contactus@bellwether.org.