Measuring what we do: fundamental not only because we want to know whether what we do makes sense (Duh!), but also as a way to learn from what we are doing and so do things better in the long run.
Somewhere along the way, however, M&E has become a whole universe with tools (toolkits?), processes, advisers, “matrices”-pushing consultants and the obligatory meetings, coordination committees and global forums, with various degrees of remoteness from the actual operations they are monitoring and evaluating.
A very long tail wagging a very small dog.
But it doesn’t have to be that way. Here are 10 ways to beef up any M&E strategy.
1. Measure outputs, not inputs.
What matters more? The size of the engine, or the speed of the vehicle? Obviously, there is a correlation between inputs and outputs – which is why historically it has been acceptable to measure and report inputs and processes, as a proxy for the real thing. But with time a whole industry has adapted to a reality where inputs are rewarded regardless of outputs. Number of trainings. Number of people reached. That sort of thing. It needs to stop.
2. Measure ONLY outputs
A corollary to the first point, liable to get me in trouble with industry insiders. But I maintain that, with limited resources, measuring anything but outputs creates noise. Which obscures the signal. It also creates the wrong incentives and generates work and complexity for everyone. It drives your costs up.
“Come on”, I hear you say, “measuring more stuff is always better than measuring less stuff, right?” Sure, with unlimited bandwidth we should collect both inputs and outputs. But we never have unlimited bandwidth. And there is no argument for increasing bandwidth, because scarce resources are better used elsewhere.
That tendency you feel to add one more question to that survey or form? Fight it. Not worth it. Focus. Think of your bandwidth.
3. M&E is not an end in itself
Don’t do M&E for the sake of M&E. Don’t laugh, that happens.
4. Go back-of-the-envelope
You know this scenario: a large research project planned as part of a big, global consortium involving academic institutions and a vast number of “stakeholders”. Agreeing on the methodology takes 9 months. Another 6 months to write the actual study design, plus another 9 months to get all the formal approvals, make sure no wheel gets reinvented, etc. Field work: 6 months planned, but what with the rainy season, make that one year. Data analysis: 6 months. Results will be “disseminated as part of a large event attended by stakeholders and the donor community”. Try to change anything and you are back at the start. By the time any insight is actually evident, no one really cares anymore.
We need more nimbleness. More quick-and-dirty. Less ambitious hypotheses, more specifically articulated and tested out in a month, tops. Adjust, then test again a few months later. Did you stumble on a possible insight? Pursue it quickly. Redesign the methodology on the go. Don’t worry about error margins – good enough is what you want here. M&E is not the point – improving what we do is. Rapid, incremental improvements are better in the long term than radical redesigns every 5 years.
5. Don’t expect anyone to work for free
In a recent article here I mentioned an archetypal nurse, overwhelmed by the daily business of doing her job: providing a health service. I argued that she should not be asked to do any stock management and/or logistics on top of her nursing work. Well, she shouldn’t do any M&E work either. Everything that is not directly related to providing a quality health service should be automated or outsourced. Especially the tedious “data collection”.
Do engineers at tech companies do any accounting? Why would we expect more from nurses in some village clinic?
If we cannot avoid the extra work, then we absolutely must pay for it. Filling in forms is work – a tedious one, too. You can’t build a solid M&E system on the assumption that people will work for free, regardless of all those awesome “capacity building” efforts. Some donors and governments have an issue with paying extra. It’s not “sustainable”, they say. But expecting overworked, underpaid workers to do extra work for free cannot be more “sustainable” in the long term than paying people fairly for it. If you can’t pay cash, at least allow people to earn points, which they can redeem for products.
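The points idea can be sketched as a tiny ledger. This is purely illustrative – the class, the point values and the redemption rule are my assumptions, not any organization’s actual system:

```python
class PointsLedger:
    """Minimal in-memory ledger: credit points for extra work, let workers redeem them."""

    def __init__(self):
        self.balances = {}   # worker id -> current points
        self.history = []    # audit trail of (worker, points, reason)

    def award(self, worker_id, points, reason):
        # Credit points for a completed piece of extra work (e.g. a form filled in).
        self.balances[worker_id] = self.balances.get(worker_id, 0) + points
        self.history.append((worker_id, points, reason))

    def redeem(self, worker_id, cost):
        # Deduct points when the worker trades them for a product.
        balance = self.balances.get(worker_id, 0)
        if balance < cost:
            raise ValueError("insufficient points")
        self.balances[worker_id] = balance - cost


ledger = PointsLedger()
ledger.award("nurse-017", 5, "submitted monthly report form")
ledger.award("nurse-017", 2, "validated a referral code")
ledger.redeem("nurse-017", 4)  # e.g. an airtime top-up
```

The point of the audit trail is that payment-for-work only builds trust if every credit is traceable to a specific piece of work.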
6. Everyone hates paperwork
We agreed that data collection is work. Mostly, it is paperwork. Filling in standardized forms is paperwork. Writing and submitting reports is paperwork. Filling in “M&E matrices” is paperwork. Submitting data using that awesome app on that specially designed, solar-charging tablet is paperwork too. Tedious, no-fun stuff.
But all that data is needed – there is the government database, our own internal database, the donor database. What to do about it?
Automate, for starters. I am a big believer in virtualizing elements of the actual service delivery/transaction. This generates contextual meta-data which we can analyze. The way we do that at my organization is to integrate a milestone validation step at some point in the service delivery/transaction: the nurse sends a simple code by free SMS, and she earns points when she does (because rule 5, right?). The code is either brought to her by the client (a referral, or a voucher) or she triggers it as part of a different process. This simple milestone validation generates enough meta-data to make any further data collection unnecessary. Type of service/transaction. Time and place. Details of the client. How much time passed between the referral and the actual visit/transaction. And a lot more, all of it on the signal side. Very little noise.
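As a sketch, here is how a single validation code can yield several data points at once. The code format (`SERVICE-CLIENTID-SITE`) and the field names are my assumptions for illustration, not the actual protocol:

```python
from datetime import datetime


def parse_milestone(sms_text, received_at, referral_log):
    """Derive transaction meta-data from one SMS validation code."""
    # Assumed code format: SERVICE-CLIENTID-SITE, e.g. "ANC-4412-KISUMU"
    service, client_id, site = sms_text.strip().split("-")
    referred_at = referral_log.get(client_id)  # when the referral was issued, if we issued one
    return {
        "service": service,
        "client_id": client_id,
        "site": site,
        "time": received_at,
        # days between referral and actual visit, when known
        "days_to_visit": (received_at - referred_at).days if referred_at else None,
    }


# One code, no form: service type, place, time and referral lag all fall out of it.
referrals = {"4412": datetime(2015, 3, 2)}
record = parse_milestone("ANC-4412-KISUMU", datetime(2015, 3, 9), referrals)
```

Note that the nurse typed nothing but the code; everything else is inferred from context the system already has.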
7. M&E goes two-ways
You have a field team collecting/ generating all this data. How is that relevant to them? Do they get a bonus if they deliver better outputs? A promotion down the line? Do they have the decision space to act on insights unlocked by all that data?
Or do they simply send stuff up the chain (”feeding the monster”), without any feedback whatsoever?
8. Shorten the distance between data and decision

You have a program team. An M&E team. A research team. Several databases maintained by several consultants. Different donors, with their own frameworks, indicators and systems. Standardized forms for every intervention. Sector-specific committees analyzing results. Reports compiled in parallel and submitted to different donors and governments.
All of that adds distance between data and decision. Every extra step creates noise. And additional opportunity for errors. Forms, data collection, apps, tabs, infrastructure. Something will always break.
The quality of the signal is inversely proportional to the distance between data and operation.
The more data can be collected seamlessly, without adding complexity, the more accurate it will be and the stronger the signal.
9. Eliminate human error
Collect transactional, contextual meta-data. Automate everything you can. If a human touches the data, the human will err. In worst-case scenarios – when only inputs are measured, for example – the human will try to game the system to maximize their incentives and/or minimize work. At scale this will corrupt your data and swamp you in noise, killing whatever signal you had.
10. Go real-time
If you need six months to get visibility into what is going on, you are missing out on a huge opportunity to improve what you are doing. It is 2015 – go real-time. By using milestone validation and meta-data, at Triggerise we are learning daily about our operations. That gives us an opportunity to make small adjustments and test things out continuously. We get to see patterns as they emerge and stay ahead of the curve. We pull the plug when things go wrong. We are in control.
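A minimal sketch of what “real-time” can mean in practice: count validated transactions per site per day, and flag any site-day that falls well below its usual volume. The field names, the baseline figures and the 50% threshold are illustrative assumptions:

```python
from collections import Counter


def daily_counts(events):
    """Count validated transactions per (day, site)."""
    return Counter((e["date"], e["site"]) for e in events)


def flag_anomalies(counts, baseline, tolerance=0.5):
    """Flag site-days running below a share of their usual daily volume."""
    return [key for key, n in counts.items()
            if n < baseline.get(key[1], 0) * tolerance]


# Three validations came in from one site that usually does ten a day:
events = [{"date": "2015-03-09", "site": "KISUMU"},
          {"date": "2015-03-09", "site": "KISUMU"},
          {"date": "2015-03-09", "site": "KISUMU"}]
counts = daily_counts(events)
flags = flag_anomalies(counts, baseline={"KISUMU": 10})
```

The same counts that feed a donor report six months later can raise a flag the same evening; the data is identical, only the latency differs.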
Are you part of the problem?
“Easy to say, but our donor insists on those indicators.” “Can’t change – we work on behalf of the government, with their frameworks.” “We are just coordinating pre-existing frameworks.” “We don’t want to reinvent the wheel.” “We advise and support; the government needs to implement.”
In fact, there is a lot you can do, right now:
– In the next funding proposal you submit, add a section with some of the arguments above. Design your logframe around outputs. In the notes, add a section: “Here is why we only measure outputs”.
– Slim down your M&E team and beef up your analytics.
– Share data with people in the field and give them the power to act on it. Incentivize them to do so.
– In your own team, link performance to outputs and link incentives to performance. Do dashboards weekly. Talk about them and make them part of the culture. Gamify.
– Invest in technology that automates stuff and frees up your bandwidth. Stop investing in gimmicky technology that creates additional complexity – more forms, more processes, more infrastructure, more training needs, more databases.
– Don’t over-process that next evaluation – keep it focused and practical. Don’t waste resources and bandwidth researching common-sense stuff.
– Have a conversation with your government partner. Your donor. Your head office. Your team.