Ways to evaluate measures

A disability service can implement a variety of strategies to prevent abuse, neglect and exploitation. However, it is important to regularly evaluate the impact of these strategies to determine their effectiveness.

The information on this page provides some basic starting points for approaching evaluation that can be applied to abuse, neglect and exploitation prevention measures. This is not intended as a comprehensive guide or manual for evaluation. There are many approaches to evaluation and many resources available to assist organisations to plan and implement evaluations.

Evaluation overview

Evaluation should be integrated into every aspect of a service, initiative or strategy from initial planning through to formal review, and should be viewed as an ongoing cycle that contributes to continuous service improvement.

Successful evaluation needs appropriate resources, so include evaluation in your budget at the planning stage of your program.

In broad terms, evaluation involves a cycle of four phases:

Plan

  • Establish goals/expected outcomes
  • Determine success measures and key performance indicators
  • Plan what needs to happen to improve the service

Design

  • Address data collection issues – how, who, when

Implementation

  • Provide regular reports to key stakeholders against key performance indicators

Review

  • Engage participants in giving feedback and reflecting on the process and benefits
  • Analyse the data, reflect on findings and reach conclusions on the effectiveness of the service
  • Identify service improvements

Planning for evaluation

Evaluation planning is best conducted in parallel with service planning. This approach will improve both the service and the evaluation. The following framework outlines the stages of evaluation development and implementation:

Step 1: Service description

  • Identify the service plan — goal, target population, objectives, interventions, reach and impact indicators.

Step 2: Pre-planning

  • Engage stakeholders.
  • Clarify the purpose of the evaluation.
  • Identify key questions.
  • Identify evaluation resources.

Step 3: Evaluation design

  • Specify the evaluation design.
  • Specify the data collection methods.
  • Locate or develop data collection instruments.

Step 4: Collection of data

  • Coordinate data collection.

Step 5: Analysis and interpretation of data

  • Tabulate/organise the data.
  • Apply statistical analysis (quantitative data) or identify patterns (qualitative data).
  • Interpret the findings.

Step 6: Dissemination of evaluation outcomes and learnings

  • What reports will be prepared?
  • What formats will be used?
  • How will findings be disseminated?

Types of evaluation

There are three broad types of evaluation: process, impact and outcome. Each serves a different purpose.

Process evaluation is used to assess the quality and appropriateness of the various elements of service and strategy development and delivery. This type of evaluation can be used during the whole life of a service, from planning through to the end of delivery.

During planning and piloting stages, process evaluation focuses on the appropriateness and quality of the materials and approaches being developed. Process evaluation also tracks the reach of the service, assesses how fully all aspects of the service have been implemented, and identifies potential or emerging problems.

Examples of questions to ask when designing a process evaluation include:

  • What is required to implement the service?
  • How are staff trained to implement the service?
  • What is required of clients?
  • How do staff select the services to be provided to the client?
  • What do clients and staff consider to be strengths of the service?
  • What typical complaints are heard from staff and clients?
  • What do staff and clients recommend to improve the service?

Impact evaluation is used to measure immediate service effects and can be used during and at the completion of implementation (e.g. after sessions, at monthly intervals and at the completion of a service program).

This type of evaluation assesses the degree to which service objectives were met. It is important that service objectives are developed and written in a way that enables later judgments about whether or to what extent they have been achieved.

A good tool for developing sound objectives to guide service development and evaluation is the SMART approach:

  • specific (clear and precise)
  • measurable (able to be evaluated)
  • achievable (realistic)
  • relevant (to the issue being addressed, the target group and the organisation)
  • time-specific (provide a timeframe for achieving the objective).

Based on the established objectives, develop relevant performance indicators to measure the impact of a strategy. These indicators may be quantitative (e.g. numbers, amounts) or qualitative (e.g. behaviours, perceptions, attitudes, feelings).

Evaluations measure the changes in identified indicators over a given period of time. Measure the indicators before implementation, and then measure them again during and after implementation. The changes show what impact the strategy has had.
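
For example, the following minimal sketch (Python, with invented indicator names and figures) shows how before-and-after measurements of indicators can be compared to show the change:

  # Invented indicator values measured before and after implementation.
  indicators = {
      "staff who completed safeguarding training (%)": (42, 88),
      "complaints acknowledged within 5 days (%)": (55, 71),
      "clients reporting they feel safe (%)": (60, 83),
  }

  for name, (before, after) in indicators.items():
      change = after - before
      print(f"{name}: {before} -> {after} (change {change:+d})")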

Areas that can be assessed through impact evaluation include:

  • changes in understanding of an issue or area of knowledge
  • behaviours or behavioural intentions
  • service delivery, organisational change
  • environmental change
  • policy development.

Outcome evaluation is used to measure the longer-term effects of a service, and is related to judgments about whether, or to what extent, a service goal has been achieved. The long-term effects may include reductions in incidence or prevalence of abuse, sustained behaviour change or improvements in environmental conditions.

Outcomes are the benefits that clients gain from the service. They are usually expressed in terms of enhanced learning (e.g. knowledge, perceptions/attitudes or skills) or improved conditions (e.g. increased safety or self-reliance).

General steps to implement an outcome evaluation include:

  1. Identify the main outcomes that you want to examine or verify. Reflect on the goals (the overall purpose) of the strategy and ask what impact the service will have on clients and staff.
  2. Choose and prioritise the outcomes that you want to examine. If time and resources are limited, pick the two to four most important outcomes to examine initially.
  3. For each key outcome to be evaluated, specify what observable measures, or indicators, will show that the service is achieving that outcome.
  4. Specify targets to achieve for particular outcomes. For example, 80 per cent of clients exhibit an increased sense of safety (an outcome), as shown by the following measures … (indicators). A simple way of checking this kind of target against collected data is shown in the sketch after this list.
  5. Identify what information (data) is needed to show the chosen indicators. If the service is new, it may be necessary to first undertake a process evaluation to check that it is being implemented as planned.
  6. Decide how the data can be realistically and efficiently gathered.
  7. Analyse and report the findings.
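
As flagged in step 4, here is a minimal sketch (Python, with invented client ratings) of how a target such as "80 per cent of clients exhibit an increased sense of safety" could be checked against collected indicator data:

  # Invented pre- and post-service safety ratings (1 to 5) for each client.
  ratings = {
      "client A": (2, 4),
      "client B": (3, 5),
      "client C": (3, 3),
      "client D": (1, 4),
      "client E": (2, 5),
  }

  target = 0.80  # 80 per cent of clients should show an increased sense of safety

  improved = sum(1 for before, after in ratings.values() if after > before)
  proportion = improved / len(ratings)

  print(f"{improved} of {len(ratings)} clients improved ({proportion:.0%})")
  print("Target met" if proportion >= target else "Target not met")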

Data collection

There are two main types of information or data that are collected and used for evaluations: quantitative data and qualitative data.

Quantitative data are often collected in normal service delivery records and can be supplemented by special statistical data collections, or through surveys, checklists and service documents.

Quantitative data cover details such as numbers (e.g. clients, practices used, sessions run, complaints, yes or no responses), amounts, times and instances of service (e.g. advice given, medication administered). Statistical analysis techniques can be used to provide statistical profiles relevant to chosen indicators.
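
As an illustration only, a small Python sketch (with invented service-delivery records) of how routine records can be turned into a simple statistical profile for chosen indicators:

  from collections import Counter

  # Invented service-delivery records: (month, record type).
  records = [
      ("Jan", "session run"), ("Jan", "complaint"), ("Jan", "session run"),
      ("Feb", "session run"), ("Feb", "advice given"), ("Feb", "complaint"),
      ("Mar", "session run"), ("Mar", "session run"), ("Mar", "advice given"),
  ]

  # Count each type of record overall, and complaints per month.
  by_type = Counter(record_type for _, record_type in records)
  complaints_by_month = Counter(month for month, record_type in records
                                if record_type == "complaint")

  print("Records by type:", dict(by_type))
  print("Complaints by month:", dict(complaints_by_month))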

Qualitative data involve people’s views, perceptions, commentary, description and other non-statistical information on what has occurred. Methods for collecting qualitative data include:

  • questionnaires
  • self-audits
  • interviews
  • document reviews (e.g. case notes, finances, minutes)
  • case studies
  • direct observations
  • focus groups.

Quantitative and qualitative data are often used together in evaluations in the human services context, to give a more complete picture and understanding of what has happened.

Selecting data collection methods

The overall goal in selecting evaluation methods is to get the most useful information to decision makers in the most realistic and cost-effective way. In good evaluations, a combination of methods is used to ensure a more complete picture of the success of the service being evaluated.

When choosing data collection methods, consider the following:

  1. What information is needed to make decisions about a service?
  2. Of this information, how much can be collected and analysed in practical, low-cost ways (e.g. using statistical data, surveys and checklists)?
  3. How accurate, complete and reliable will the data be?
  4. Will the chosen methods get all of the information needed?
  5. What additional methods should be used?
  6. Will the resulting data appear credible to decision makers (e.g. to funding bodies, senior management)?
  7. Will those who provide the data cooperate with the chosen methods (e.g. will they complete questionnaires, engage in interviews or focus groups and let you examine documentation)?
  8. Who can administer the methods? Is training required or will you need to employ someone with the expertise?
  9. How can the information be analysed?

Analysing and interpreting data

Some basics in analysing quantitative and qualitative data can help to make sense of large amounts of information:

Always start with your evaluation goals

When analysing data (from whatever source), always start by reviewing the evaluation goals (i.e. the reasons for the evaluation). This will help you organise your data and focus your analysis.

Basic analysis of quantitative data

  1. Tabulate the data (i.e. add up the number of ratings or rankings for each question).
  2. Apply statistical analysis to the data (e.g. mean, range, statistical significance).
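
A minimal sketch of these two steps, using Python's standard statistics module and invented survey ratings (a formal test of significance would usually be done with a dedicated statistical package):

  import statistics

  # Invented ratings (1 to 5) for two survey questions.
  responses = {
      "Q1: I know how to report a concern": [4, 5, 3, 4, 5, 4],
      "Q2: I feel safe using the service": [3, 4, 4, 2, 5, 4],
  }

  for question, ratings in responses.items():
      print(question)
      print(f"  responses: {len(ratings)}")
      print(f"  mean: {statistics.mean(ratings):.2f}")
      print(f"  range: {min(ratings)}-{max(ratings)}")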

Basic analysis of qualitative data

  1. Organise and label data under similar categories (e.g. concerns, suggestions, recommendations, strengths, weaknesses, similar experiences, outcome indicators).
  2. Attempt to identify patterns or associations and causal relationships in and across the categories (e.g. staff who attended training had similar concerns).
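
A deliberately simplified sketch (Python, with invented comments and keyword rules) of how free-text feedback might be grouped under categories before looking for patterns; in practice, qualitative coding is done by a person reading and judging each response, not by keyword matching:

  # Invented free-text feedback and simple keyword rules for two categories.
  comments = [
      "More training on reporting concerns would help",
      "The complaints process is too slow",
      "Staff listened to my concerns and acted quickly",
      "Training sessions were useful but too short",
  ]

  categories = {
      "training": ["training", "session"],
      "complaints and concerns": ["complaint", "concern"],
  }

  coded = {name: [] for name in categories}
  for comment in comments:
      for name, keywords in categories.items():
          if any(keyword in comment.lower() for keyword in keywords):
              coded[name].append(comment)

  for name, items in coded.items():
      print(f"{name}: {len(items)} comment(s)")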

When reporting on the evaluation, record conclusions about service operation, such as whether it met specified goals, and make recommendations to help improve the program. Summarise the data collected and provide an interpretation of it to support the conclusions and recommendations in the report.

Evaluating abuse, neglect and exploitation prevention measures

It is important to evaluate the impacts and outcomes of abuse, neglect and exploitation prevention measures so that they can be continuously improved or added to in order to develop and maintain client-safe practices and environments.

Several actions can be taken to help with this evaluation:

  • Before implementing new or revised abuse, neglect and exploitation prevention measures, ask staff, clients, families and carers about how they view the current situation regarding abuse, neglect and exploitation prevention, and what they think could and should be improved.
  • Invite feedback on policies and procedures as they are developed, and again once they are implemented. Do this through focus groups, team meetings, surveys and questionnaires for staff, clients, families and carers.
  • Monitor implementation and ongoing use of the measures. For example, gather data on incidents and complaints and look at changes and trends over time (i.e. before, during and after implementation), and seek periodic feedback from teams, managers, clients, families and other stakeholders on specific questions related to the use and impact of different abuse, neglect and exploitation prevention measures.
  • Progressively collate and analyse relevant statistical data and qualitative feedback to identify the impacts and outcomes of each measure over time.
  • Review the evaluation outcomes regularly to ensure that the measures are working and to identify gaps that need to be dealt with.
