By: Samantha Hautea, GREAT Communications Specialist
How do you ensure that gender training curricula and trainers actually deliver greater adoption of gender-responsive research, and in a truly meaningful way? It’s a question that many projects face as they respond to a growing awareness of the central role gender plays in shaping development outcomes, and one that confronted the GREAT team at the inception of the project.
One of the key individuals in designing the GREAT program was Deborah Rubin, a leading expert on gender and social systems analysis and Co-Director of the consulting firm Cultural Practice, LLC. For this second part of our series on measuring impact (see the first part, our interview with Vicki Wilde from the Bill & Melinda Gates Foundation, here), we spoke with Rubin to hear her thoughts on measuring impact for capacity-building projects and the lessons she has learned from GREAT and throughout her career.
Participants and trainers go through an exercise in gender-based constraints during Week 1 of the GREAT legume breeding course.
“Achievements resulting from capacity building can most certainly be measured. And they have been measured by any number of studies measuring results from training programs for smallholder farmers on improving agricultural practices as well as programs comparable to the GREAT program such as fellowships and other types of training for students,” Rubin explained. The GREAT program incorporated project-level monitoring and evaluation activities with its partner, Aline Impact, Ltd. USAID-funded training and fellowship programs for agricultural researchers have long been the subject of evaluations, but until recently relatively few evaluations looked specifically at gender-related training.
Such measurements, Rubin said, are important not only to donors but also to the recipients of the training and to anyone in the development community concerned with achieving gender-transformative change. With often limited resources, institutions and potential course participants want assurance that they are not wasting their time on a program that does not deliver real results.
“Good training programs can be a way to achieve greater equality and inclusivity. Lousy training programs, however, turn everyone off, reduce opportunities for others to offer trainings, and can create a pool of underqualified ‘graduates’ who then misapply their knowledge and underperform,” Rubin cautioned. “Un- or underqualified ‘gender experts’ can do lots of harm and make it very difficult for good gender analysts to gain traction at a later date.”
So how can one assess whether a program is really delivering on capacity development? Rubin said she believed two fundamental issues must be addressed when setting out to measure a program’s effect on capacity building.
The first issue is establishing a baseline for measurement. That means either conducting a baseline survey of participants (or other analysis) at the outset, or determining after the fact what existing data can serve as a baseline. Establishing a baseline at the beginning of a project provides a point of comparison for seeing what changes have actually taken place. The second issue is identifying the measurable results that can reasonably be attributed to the training program. Many factors, both direct and indirect, can affect outcomes. But it is important not to lose sight of the program’s goal and to draw a clear connection between program activities and results.
Rubin recommends three broad categories for measuring the impact of gender trainings: attention to gender integrated into research proposals, changes reflected in publications, and metrics of institutional change.
Attention to gender integrated into research proposals might include:
- In-depth treatment of gender issues that explains, e.g., the specific existing gender disparities that influence the problem on which the research is focused
- An explanation of how the research results will help to reduce identified gender-based constraints related to the research focus – in the GREAT training, we talked about this as identifying the expected or desired gender equality outcome.
- Clarity in the choice and description of methods, including the collection and analysis of sex-disaggregated data
- Careful design of research proposal monitoring and evaluation indicators so they can track changes in both women’s and men’s behaviors (e.g., yields, use of inputs, participation in trainings) and the differences between them
Success might be reflected in publications through:
- Increased numbers of publications by participants that include attention to gender
- More women publishing papers, even if they are not specifically on gender issues
- More sophisticated gender analysis demonstrated in the quality of publications
Examples of institutional change might include:
- Participants in gender-responsive trainings receiving promotions
- Adoption of new policies that reflect the importance of attention to gender, such as new guidance
Lessons from the GREAT experience
Using GREAT as an example of what could be improved, Rubin explained that she was not sufficiently attuned to evaluation needs at the start of the program. Her initial questionnaire for participants was designed for developing training materials, not for establishing a baseline. As a result, efforts to measure what participants in the first cohort actually learned from the course were limited; this has changed as Aline has implemented a better method for collecting baseline information in later trainings.
“In hindsight, a better approach than my assessment questionnaire would have been if I had designed an initial questionnaire asking people about what they DID in applying knowledge about gender integration in their own projects, and then to track whether their actions in their work actually changed as a result of the training,” Rubin said. “Otherwise, you have only self-reports of how useful the knowledge is. Or you can show that classroom learning was successful by giving a test, much like in a university classroom, but you don’t know if having acquired the knowledge changed the behavior.”
Another option, Rubin added, would be to follow up with participants about changes in their institutions or in their jobs and see if they get into positions in which the knowledge from the course can be well applied.