When carrying out evaluation assignments, South Research focuses on ensuring that the results are usable and actually used, regardless of the evaluation objective (accountability, learning, policy support).
Good use of evaluation results means that the evaluation facilitates learning in an often complex institutional environment, involving several parties with different geographical and cultural backgrounds and, in some cases, diverging interests. Each evaluation assignment is thus part of a broader process of quality improvement. For this reason, the follow-up phase, in which the main evaluation findings must be translated into new practices and policies, is given due consideration from the start.
Every evaluation is tailor-made.
Sound implementation of evaluations therefore requires knowledge of, and experience with, a broad range of instruments and methods tailored to the assignment at hand. Over the years, South Research has applied many well-established instruments and has also developed its own toolbox. It can carry out evaluations that emphasise quantitatively determining the effects of a project or programme as well as evaluations that aim for a more participatory learning process, and it often uses a mix of methods that enhance both the participation of the main stakeholders and the reliability of the results. In the design, implementation and after-care phases of evaluations in a multicultural setting, important considerations include gender sensitivity, the avoidance of bias, and the interests and perceptions of weaker groups.
A few examples of evaluations carried out:
Final evaluation of the Public Private Partnership in the Water Sector in Indonesia. This was the evaluation of a pilot programme involving a collaboration between Dutch water companies and (public) Indonesian water companies. The programme got off to a difficult start and a contested interim evaluation resulted in additional tensions. The evaluation placed strong emphasis on “learning from this experience” and was perceived as such, while still delivering in-depth analysis and putting forward “hard” findings.
Evaluation of the effect of the TUDCN (Trade Union Development Cooperation Network). This was an evaluation, limited in scope and ambition, of a trade union network with members scattered all over the world. This specific setting called for a highly unconventional evaluation methodology centred on an Internet survey and Skype interviews, alongside discussions with a number of key persons in Europe. The results of the evaluation were presented at the general meeting of TUDCN and used to further develop the network.
Over recent decades, a wealth of evaluation practice has emerged in the field of development co-operation. Evaluations have been set up and conducted with different considerations in mind and have become part of professional development practice. However, research shows that the impact of evaluations generally falls short of expectations.