M & E in Communications for Change Programs: An Invitation to Dialogue and Context in Program Assessment
On May 10, 2017, Rain Barrel Communications and PCI Media Impact organized a seminar in NYC featuring the well-known author and Program Evaluation expert Dr. Saville Kushner*. It was attended by 25 international development professionals from UN agencies and NGOs. Entitled “Reality, Rights & Democracy: Three Casualties of M & E”, this lively, provocative event looked “outside the box” to examine the political, ethical and practical implications of the imperative to “get results” in development projects, at a time of rising challenges to program initiatives and multilateral cooperation.
Working groups addressed several provocative questions posed by Saville, focusing especially on M & E in the context of our organizations’ work in communication for development (C4D):
Who has rights to critical knowledge arising from evaluation? Who should have information rights? Who should not?
Is there a role for evaluation to generate multiple narratives for public debate and argumentation, and collaborate with Communications in convening those debates? Or are we stuck with less complex rights-based arguments?
Is evaluation an element of Communications strategy, communicating complex on-the-ground realities to agencies – or does Communications follow evaluation by disseminating policies, successes and ‘what works’ agendas?
Good programs can produce poor results; poor programs can produce good results. Given that ‘result’ and ‘quality’ are not the same thing, should we be more concerned with results or with program quality?
By pre-specifying results, are we not disabling in-country institutional practitioners who need the flexibility to respond to complex and volatile local contexts? So how do we localize results?
We asked Saville to share his reflections and takeaways from the seminar. While participants may differ on the conclusions, all would agree that it inspired a lively and provocative discussion. Here is an edited summary of the topics we explored, in the hope that it will continue to challenge our thinking and encourage us to “evaluate the way we evaluate”:
Do we evaluate too much, learn too little, and communicate not enough?
We all work in values-driven organizations, and evaluation should support the values and pro-social purposes of our work. And yet M & E comes to us as a sort of technology, a set of values-neutral instruments. We want to know how to assess our work in improving the human condition, but our instruments may be designed for other purposes – measuring results alone, meeting donor accountabilities, fulfilling grant requirements – at the expense of values-driven discussion and debate. We must guard against being more concerned with narrowly measured results than with the broader quality of interactions in a program, or with those accomplishments that cannot easily be measured.
How do we measure inspiration? Beliefs? Hopes and fears? The quality of relationships? The strengths and uniqueness of a local culture?
At the seminar, we looked at the overlap between M & E and Communications – each with the potential for generating critical knowledge and two-way information flows. Both have the capacity to reveal data and to help us learn how we relate to the data we gather. People in development projects need information and shared values, which suggests that every evaluation should have a communications strategy. Equally, every communication initiative should provoke reasoned debate and judgment, and thus needs critical, evaluative information to underpin it.
Before objective evaluation, there was subjective judgment. No methods, no algorithms, no impact measures – just the interplay of decision and perceived experience. We still made critical decisions; we started, improved and terminated programs; we could say intuitively that some efforts had more merit than others. We made assessments, learned lessons, came to conclusions, communicated messages. We were still accountable. But perhaps those judgments could benefit from an ongoing dialogue, in addition to objective data and conclusions.
It’s more transparent now, more organized and efficient – reliable. Evaluation for decision making now has a technology. What we seek to communicate is, hopefully, validated information.
Do we make better decisions as a result? Are our messages more to the point?
Well, they may be better in the sense that there is a point of reference from which people can argue for or against the validity of a decision, and which encourages dialogue leading to greater understanding and knowledge. We must, however, guard against evaluation that displaces judgment: turning to evaluation not to justify a decision but to avoid making one of our own.
But are these decisions themselves better – in their substance and in their relevance to our work? Do we allocate resources better? Make better trade-offs between the claims of those who will win and those who will lose as a result of a decision? Are our communications better informed, more responsive, more saturated with cultural values as a result of the evaluation that feeds into them? I thought our dialogue around these questions was rich and nuanced, giving us much food for thought about our individual practices and institutional cultures.
Does evaluation make us wiser?
We think so, but there may be limited cause to celebrate. We must guard against becoming so concerned with refining M & E technology that we risk forgetting ourselves – our ability to make judgments informed by experience, our human predilection for discussing and testing ideas with colleagues through dialogue and conversation.
We might be persuaded that an evaluation of, say, an infrastructure project has little overlap with a maternal health or child protection initiative. In narrowing our scope of work this way, however, we overlook the larger context. Many of our development interventions across cultural boundaries teach us the perils of doing so. This complexity should make us sensitive to the limits of what we can know, and to what must be left to an interactive search for broader context and for knowledge that can be harnessed for positive change.
How far do M & E and Communications go in fostering public debates?
These debates are front and center now and should be encouraged through open and non-threatening dialogue. M & E and communication initiatives are often invisible players in them. The question is: can we sharpen our practices to become more relevant and impactful – not merely by harnessing the tools of our trade, but through a more intentional embrace of democratic values and forms of engagement for social change? That, to me, was the informal conclusion of our discussion at the seminar, for which I am most grateful.
—————
* Saville Kushner, FAcSS, is Emeritus Professor (University of the West of England). His most recent post was Professor of Public Evaluation at the University of Auckland. He is the author of numerous books on Program Evaluation, including Personalising Evaluation (Sage, 2000) and Evaluative Research Methods (Information Age, 2017). In 2014 he and his co-author were short-listed for the prestigious Bread & Roses UK literary award for their book, Who Needs the Cuts? Myths of the Economic Crisis. Between 2005 and 2007 he served as Regional M & E Officer for UNICEF/LAC, and, in New Zealand, he served as Chair of the oversight Board for New Zealand Aid and as Expert Adviser on evaluation to the New Zealand government. He has conducted and directed numerous evaluation projects in the UK, USA, New Zealand and internationally, including for UNICEF, AusAID and Twaweza (Tanzania).