Author: Jo Garner

At the 41st FIA Conference in Sydney last week, our Communications and Evaluation Strategist, Kate Sunners, and I facilitated a Masterclass, “Grants Success: The essentials of performance management, relationship management and reporting.” Accompanied by funder friends Caitriona Fay, National Manager Philanthropy & Non Profit Services at Perpetual, and Anne Long, CEO of the Greater Charitable Foundation, we spent the day exploring:

– Current funder expectations with regard to performance and outcomes measurement and reporting

– Whether your organisation is ready and resourced for monitoring and evaluation

– What your evaluation should be measuring, and tools you can implement right away

– A case study on best-practice relationship management, evaluation and reporting

Here is a wrap-up of the key messages from the session.

1. Be clear on the difference between monitoring and evaluation. Monitoring is the collection and analysis of information about a project or program, undertaken while the project or program is ongoing.

Evaluation is the assessment of the results of a program (or a phase of a program) – its outcomes, efficiency, effectiveness and impact – which provides insight into the value of the program or project.

2. Both government and philanthropic investors are placing increasing importance on outcomes and impact measurement and reporting. We have seen the establishment of the government’s Principles for Social Impact Investing and the creation of Social Benefit Bonds by a number of state governments, and philanthropic funders are increasingly emphasising the importance, from their perspective, of measuring outcomes and impact.

The Greater Charitable Foundation, for example, “funds practical, life-changing initiatives, which directly support families and communities.” In addition to assessment criteria focusing on Organisational Governance, Capacity and Capability, and Strategy, they also focus on Outcomes and Impact: “The proposed program needs to display a sound rationale and evidence base and demonstrate the capacity to deliver positive, long-term change for participants. There must be evidence of realistic, measurable and achievable outcomes and you need to outline how you intend to report, evaluate and articulate a social return to the beneficiary families and/or communities.”

3. Funders want to know:

a. What will you tell them?

b. How will they know it’s true?

c. How will data be presented?

d. How can they use the data to tell their story and measure their own effectiveness as grant-makers? But remember: requirements differ, so make sure you give them exactly what they ask for!

4. You also need to ask yourself:

a. What data are you capturing and storing?

b. Is your data being stored safely?

c. Are you collecting your data ethically?

d. Do you know what your data is telling you and are you using that new knowledge?

5. Evaluation framework design needs to be built into the program development stage! All too often we hear that the funder report has been delayed because the data has not yet been collected. Evaluation frameworks and data capture methods need to be embedded at the project’s outset. (NSW Government Evaluation Toolkit)

6. There is a disconnect between what philanthropists will fund and what NFPs think they’ll fund. Research from Melbourne Business School indicates that 52% of philanthropists say there is a medium to high likelihood of philanthropy providing support for evaluation, yet according to NFPs there is only a 14% likelihood of philanthropy providing financial support for evaluation. The important message here: TALK TO THE FUNDERS!

7. Increasingly, funders (like Perpetual) want to see that organisations are governed by a board that can pivot – one that knows whether it is doing a good job or a bad job and responds accordingly. Funders need to understand that you are a good investment. According to Caitriona Fay, this is a key consideration for Perpetual: “We are investing in organisations that know how to measure.”

8. Be clear about your Key Evaluation Questions. What do you want to know? For example:

a. How well did the program work?

b. Did the program produce or contribute to the intended outcomes in the short, medium and long term?

c. What is the most cost-effective option?

9. And finally, how do we define success?

a. Define the program’s objectives.

b. What are the key indicators that will tell you your program has met these objectives?

c. Choose SMART indicators – Specific, Measurable, Attainable, Realistic, Timely. Benchmark: what does success look like for other organisations working in similar service areas?

d. Don’t assume you know the answers! Do your research!

The Strategic Grants Evaluation Team can help you report your outcomes to donors and funders in a meaningful and relevant way, right from evaluation framework design.

For more information, please contact us: https://www.strategicgrants.co.nz/au/services/program-evaluation