Impact Measurement in Volunteering

From VolunteerWiki

We all involve volunteers because we want to make a difference: to our service users, our cause, our volunteers themselves and to wider society. The difference we make is the impact. Increasingly, we need to understand and quantify what that impact is, as both our funders and our trustees want and need to understand it. Evaluating and measuring our impact is therefore important. It is also important that we don't just measure the impact of the services we provide, but also the impact that our volunteers experience because of their involvement with us. Measuring impact can sometimes feel a little daunting and complex. It doesn't have to be, and this article will help by taking you through basic concepts, methodologies and techniques around impact evaluation.

What is impact measurement?

Impact is the change that happens as a result of the different activities carried out on a project or piece of work. This change, or impact, can be positive or negative, big or small. Regardless, we need to measure what actual difference we are making. While our primary interest may be in the impact that our service makes, it is also important to remember that there is an impact on our own volunteers because of their involvement with us. This impact on our volunteers can often tell a powerful story.

Reasons for measuring impact

There are many reasons for measuring your impact, including:

  • Reporting to funders.
  • Reporting to other stakeholders.
  • Gathering evidence for future funding applications.
  • Demonstrating how your volunteers are helping achieve organisational goals.
  • Influencing public policy.
  • Demonstrating your activities are aligned with your mission and purpose.
  • Improving practice.
  • Marketing and promoting your organisation.

Impact ON volunteers vs impact OF volunteers

It is important to distinguish between the impact of the work our organisation does by involving volunteers and the impact on our volunteers of their participation in volunteering. A clear distinction between the two allows us to use different methodologies and tools depending on what we want to measure.

  • Impact of volunteers: the change made to the community, service users or environment thanks to the activities carried out by volunteers.
  • Impact on volunteers: the change created in the volunteers themselves (skills, attitudes, confidence, employability, etc.).

Easy Steps to good impact evaluation

Step 1 - Ask the essential questions

Even though it may seem obvious, the most important question we have to ask ourselves is: what are we measuring, and why do we want to measure it?

  • How will we use the information we gather?
  • Is it for internal or external use, or both?
  • Will the information be used for funder reporting and/or for us to see how well we are performing?
  • What are we trying to "prove"?
  • Are we measuring the impact made by our volunteers, the impact on our volunteers or both?
  • Are we simply counting numbers or do we want more?
  • If we want more, can we pass the "so what" test?

The "so what" test is a good one to consider at the planning stage as it will influence how you design your evaluation. As an example, if I run a training course and 10 people attend, I will want to count that. I may also want to measure how happy they were on the day. So 10 people attended my training, 9 of them said it was Very Good and one said it was OK. That may be enough for some parts of the evaluation, but it doesn't tell me what happened as a result of the training. It doesn't pass the "so what" test. Therefore, to find out how effective the training really was, I may want to do some follow-up, checking with the attendees what they did differently as a result of the training.
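As an illustration only (the ratings and follow-up figures below are invented), the day-of-training satisfaction count and the three-month follow-up for an example like this can be tallied in a few lines of Python:

```python
from collections import Counter

# Hypothetical end-of-day ratings from the 10 attendees in the example above
ratings = ["Very Good"] * 9 + ["OK"]

# Hypothetical three-month follow-up responses (invented for illustration)
follow_up = {"changed_practice": 6, "no_change": 2, "no_response": 2}

print(Counter(ratings))  # satisfaction on the day
rate = follow_up["changed_practice"] / sum(follow_up.values())
print(f"{rate:.0%} of attendees reported doing something differently")
```

The satisfaction count alone would fail the "so what" test; it is the `changed_practice` figure from the follow-up that tells you what the training actually achieved.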

The next most important question you need to ask yourself is what resources you have. There is no point in designing a comprehensive evaluation programme if there are no resources, human or financial, to actually deliver it.

Step 2 - Specifying the measurement

You need to know what you are trying to measure, and you need to be clear and specific. The changes or results you are looking for are often called indicators. You need to define what changes you are expecting (hoping!) to observe in the volunteers, service users, society, etc. Normally, these indicators are related to the aims and objectives of the project. It may be quite simple, or you may want to think more widely about your indicators and how to measure them. There are many off-the-shelf models that you can use to help you do this, and there is no need to reinvent the wheel! Logic Models and Theory of Change models are popular and easily adapted to your needs. They are also useful for helping you think through how you are evaluating the work.

Step 3 - Defining design and techniques

Now that you know what you want to measure and why, you have to decide how you are actually going to do it. You may end up using several different tools. Using the training example from earlier, I can count the attendees simply enough on the day, get them to fill in evaluation sheets at the end of the training and collate them, and then do three-month follow-up calls or emails to establish what changed for them as a result of attending (at this stage a sample approach may be more realistic, depending on numbers and resources). So that is three different actions to get the evaluation that I want. Sampling, of course, is a minefield and careful thought needs to be given to it: how representative your sample is, and how big it is, are both important and should be thought through.
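Where following up every attendee is not realistic, a simple random sample can be drawn instead. This sketch uses only Python's standard library; the attendee list and sample size are invented for illustration:

```python
import random

# Hypothetical attendee list; in practice this would come from your records
attendees = [f"attendee_{i}" for i in range(1, 201)]

random.seed(1)  # fixed seed so this sketch is reproducible
sample = random.sample(attendees, k=40)  # a 20% simple random sample, no repeats
print(f"Following up with {len(sample)} of {len(attendees)} attendees")
```

A simple random sample like this is only representative if the underlying list is complete; if your attendees fall into distinct groups, a stratified approach may be more appropriate.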

Designs

Before and after ('pre- and post-test')

Depending on what you are measuring you may want to consider before and after testing. The data you gather at the start is your baseline. You can then revisit after you have delivered whatever it is that you are delivering, ask the same question and compare the results.

As an example, if I run a marketing campaign to raise awareness of my project, I may want to survey a sample of people before the campaign to understand how aware they are of the project. After the campaign has finished, I may re-sample to see how many people are now aware of the project, and also how they became aware of it. From this I can tell, within reason, whether the campaign has been worth doing.
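A minimal sketch of that baseline comparison, assuming each response is recorded as aware (1) or not aware (0); the figures are invented for illustration:

```python
# Hypothetical awareness samples: 1 = aware of the project, 0 = not aware
before = [1, 0, 0, 0, 1, 0, 0, 1, 0, 0]   # baseline, before the campaign
after = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]    # re-sample, after the campaign

baseline = sum(before) / len(before)
post = sum(after) / len(after)
print(f"Awareness rose from {baseline:.0%} to {post:.0%} "
      f"({(post - baseline) * 100:+.0f} percentage points)")
```

The baseline is what makes the 'after' figure meaningful: without it, you would know how many people are aware, but not how much of that awareness the campaign created.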

Retrospective

This method is not as robust as the previous one, but a pre- and post-test evaluation design is not always possible. That is why we often measure the impact only after the project has started (or when it has finished). For example, you could ask volunteers whether they feel less lonely because of volunteering at your organisation.

Types of data and techniques

  • Qualitative: emphasises the experiences, the behaviours, the feelings, etc. and typically is collected using interviews, focus groups, observations, case studies.
  • Quantitative: emphasises the statistical aspects of the impact, the numbers, the big picture and can be collected from questionnaires, surveys and polls.
  • Primary: this is the data collected by us through our direct experience, for example, if we run a focus group.
  • Secondary: this is the data obtained from published resources, for example, the Scottish Household Survey.

There is no ideal combination of the different types of data, since it will depend on what you are measuring. The available resources (time, staff, budget, etc.) will also affect the data and sources you use.

Step 4 - Data collection

Every measurement technique has its own benefits and drawbacks. It is crucial to understand them in order to collect the most accurate data possible; otherwise you will gather information which could be misleading and which will lead to unreliable conclusions. It is worth ensuring everyone involved in gathering the data understands why and how it is being collected and what the purpose of collecting it is. The transcription of information is essential as well, and depending on the techniques used there are different considerations to bear in mind.

There are several software products available which can help you gather and process the data, especially quantitative data. For instance, it is common to use survey software where the processing of the collected data is done automatically (e.g. Quick Tap Survey, Open Data Kit, Survey Monkey). However, you still need to spend time on the design, especially when there are sections including skip-logic questions. In the case of paper questionnaires, it is important to consider the time required to transfer the responses into appropriate software (e.g. Excel, SPSS, PSPP, Stata).
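For a simple tally of transcribed paper questionnaires, even Python's standard csv module is enough before reaching for dedicated survey software. The column names and responses below are invented for illustration; in practice the data would be read from a file rather than an inline string:

```python
import csv
import io
from collections import Counter

# Hypothetical transcribed questionnaires; in practice use
# csv.DictReader(open("responses.csv")) instead of an inline string
raw = """respondent,rating,would_recommend
1,Very Good,yes
2,OK,yes
3,Very Good,no
4,Very Good,yes
"""

rows = list(csv.DictReader(io.StringIO(raw)))
ratings = Counter(row["rating"] for row in rows)
recommend = sum(row["would_recommend"] == "yes" for row in rows) / len(rows)

print(ratings)
print(f"{recommend:.0%} would recommend")
```

Keeping one row per respondent and one column per question, as here, also makes the data straightforward to import later into Excel, SPSS or similar tools.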

Step 5 - Analysis

At this point, you have collected and processed all the information you need to measure the impact. Now you start to put together the pieces of the 'puzzle' and draw conclusions from your data. It is important to consider whether the impact occurred because of the organisation's work or because of external influences that have nothing to do with the project. Overall, the idea is to obtain evidence of what you are trying to "prove" (remember 'Step 1 - Ask the essential questions').

Step 6 - Communication / reporting

Now it is time to inform your stakeholders. Remember these questions from 'Step 1 - Ask the essential questions':

  • How will we use the information we gather?
  • Is it for internal or external use, or both?
  • Will the information be used for funder reporting and/or for us to see how well we are performing?

In some cases you will need a formal document. In others, a visual design with pictures and drawings will be more appropriate. You can share the report via a newsletter, or present it at a public event, for example. It will depend on the responses to these questions.

Final (initial) step

Has the evaluation finished at this point? Not at all. Impact measurement and evaluation is a cyclical, ongoing process, so the final step can be considered the first one of the next impact measurement cycle.

Throughout the steps above, we learned a great deal about the work the volunteer-involving organisation does: how it can be improved, what needs to change, what the strengths and weaknesses are, which designs and techniques were appropriate and which require improvement, and whether another approach to the project might have had a bigger impact. All of this will feed into future impact measurement and evaluation work. This knowledge will also be really useful for designing new projects and improving how the organisation is managed.

More help?

If you would like more help or advice please contact Volunteer Edinburgh on 0131 225 0630 or email: hello@volunteeredinburgh.org.uk
Or you can drop in and see us:
Volunteer Edinburgh
222 Leith Walk, EH6 5EQ