
Transforming practice into policy: a bridge over troubled water

 

Reading the Learning Disability Today blog post ‘How unofficial social policy drives change’, written by Steven Rose, CEO of the charity Choice Support, prompted us to offer our own contribution to the debate about what drives change in social policy. In his article, Rose implied a kind of non-interacting parallelism between policy, as represented in government papers, and practice, as it may be reflected in the academic or professional press. Ideally, of course, practice and policy should enjoy a fruitful interaction, with policy papers capturing and promoting the best in practice, and practice stimulated and facilitated by policy. If this is not the case, what might be done to improve matters? In this piece, we advocate programme evaluation as a bridge between practice and policy.

Programme evaluation is a process by which data and information are collected to answer questions about the success and methods of projects, policies and programmes. Typically, it focusses on their effectiveness and efficiency since, in both the private and public sectors, stakeholders, including funders, managers and practitioners, want to know whether the programme they are funding, implementing or promoting is actually achieving its objectives. Funders often insist on some form of programme evaluation to ensure that they are receiving value for money. For example, the Sure Start programmes funded by the UK government made it a condition of funding that 10 percent of the budget should be spent on programme evaluation. (Ironically, whilst the government amassed a large number of programme evaluations through this requirement, it also funded a central evaluation which was based on inferior data and had little impact, demonstrating that programme evaluations are not always taken up in the way we are suggesting.)

While programme evaluation is centrally concerned with these issues of success and value for money, it also has a number of additional benefits. Data gathered in the evaluation can be fed back to practitioners, both during the programme and at its conclusion, to improve delivery in a cycle of continuous quality improvement.[1] This process of formative and summative evaluation helps providers to learn from their experience and continuously improve their programme. For example, an evaluation we undertook of resettlement from a hospital to supported accommodation revealed some short-term decrement in social interaction, which was addressed during the programme through enhanced opportunities in the local community.

The descriptions of the programme produced by the evaluators can be used as a basis for its development or replication. A programme evaluation should provide a blueprint for the programme so that achievements and lessons learned are not lost. The evaluation report can provide valuable publicity for the programme, demonstrating its cost-effectiveness and providing a basis for further funding.

In particular, the evaluation report can include recommendations for future practice and policy, and thus link innovative and successful practice with policy formulation. For a number of reasons, practitioners are unlikely to capture and publicise their own practice; here the evaluation report provides a valuable record of process and outcomes from which others can learn. For practitioners, programme evaluation can be a celebration and a record of success.

In the model we propose, where programme evaluation can bridge practice and policy, there are four important stages. First, programme providers have to be convinced that a programme evaluation would be a valuable and cost-effective use of their funds; it takes commitment to invest in an evaluation when the money could be used for direct service delivery. Choice Support has adopted an evaluation strategy, and as part of this the Social and Health Evaluation Unit was commissioned to evaluate a shift from waking-night to sleep-in support for adults with learning disability. Following this, a further evaluation was commissioned to look more broadly at a Personalisation Programme of which the sleep-in support was a part, along with individual service funds, individual care and support plans, and support circles to ensure a coordinated approach. Both evaluations found that the quality of life of people in supported residential accommodation was improved, with potential risks managed and savings achieved. The evaluation reports were subsequently published by the Centre for Welfare Reform as Better Nights and Better Lives.[3, 4]

The second stage is the evaluation itself, where various approaches can be adopted. In the Social and Health Evaluation Unit, we use a model we call the Trident,[2] which focusses the evaluation on three main questions:

• Were the anticipated outcomes of the programme achieved?

• What was the process whereby the programme was delivered?

• What did the various stakeholders involved in the programme think of it?

The prongs of the evaluation trident are thus Outcomes, Process, and Multiple Stakeholder Perspectives.

At the start of the programme, the intended outcomes are identified and confirmed, and agreement is reached on the data which will be gathered as evidence that the outcomes have been achieved. For example, in the night support programme the outcomes included management of risks, improvement in quality of life, and savings; these were measured through specially devised audit tools. Description of the process included details of staff deployment, technical support and staff training. Stakeholder views included those of the residents themselves, support staff, managers, and parents and relatives.

Our evaluations are conducted as a partnership between providers and evaluators and supported by a Steering Group including these partners and also purchasers, commissioners and, if possible, service users. The evaluators seek the approval and support of the Steering Group with regard to the evaluation questions and the data that will be gathered to address them. The Steering Group receives interim and final reports.

Thirdly, we believe the final report of an evaluation should always culminate in actionable recommendations for practice and policy, addressed to individuals or groups who can actually implement them. At its final meeting, the Steering Group approves the recommendations and agrees to support their implementation.

The fourth stage of this model is the receipt of the report by policy makers at local, regional and national levels. This is intended to maximise the impact of the programme on future policy and practice formulation. In this way, the report of a programme evaluation can certainly bridge innovative practice and policy formulation. In the next government policy statement on support for those with learning disability, we will be interested to see how many references there are to published programme evaluations.

 

References

1. Fereday S, Riley E (2007) Social Care Audit in Practice. Healthcare Quality Improvement Partnership, London.

2. Hogard E (2016) Using the Trident in Program Evaluation. In: Vaidya K (ed) Program Evaluation for the Curious: Why Study Program Evaluation? The Curious Academic Publishing Press. ISBN 978-1-925128-40-6.

3. Ellis R, Sines D (2012) Better Nights. Centre for Welfare Reform.

4. Ellis R, Sines D, Hogard E (2014) Better Lives. Centre for Welfare Reform.

 
