In Pursuit of Development

Navigating by judgment to achieve development impact — Dan Honig

Episode Summary

Dan Banik and Dan Honig discuss the role of ideas and buzzwords in development, how we should measure development impact, and how and why the concept of “navigation by judgement” is a critical managerial tool that can help maximize the positive impact of aid.

Episode Notes

In an excellent book on how aid agencies manage foreign aid projects, Dan Honig argues that tight top-down controls and a focus on target-setting and metrics often lead aid projects astray. 

If one navigates from the top, one may achieve more management control, more oversight, and more standardized behavior. But this may come at the cost of flexibility and adaptability. By contrast, if one empowers those closest to the ground, and focuses on what field agents can see and learn, one may apply so-called “soft information” that will in turn allow for more flexibility. 

Managing large organizations is not easy. And most politicians and bureaucrats struggle to find the right balance between when to control and when to let go. In the book Navigation by Judgment: Why and When Top-Down Control of Foreign Aid Doesn't Work, Dan Honig argues that a misplaced sense of what it means to “succeed” encourages many aid agencies to get the balance wrong.

Dan Honig is an assistant professor of international development at Johns Hopkins School of Advanced International Studies (SAIS). He is currently a visiting fellow at Leiden University’s Institute of Political Science, and a non-resident fellow at the Center for Global Development. He was previously special assistant, then advisor, to successive Ministers of Finance in Liberia and ran a local nonprofit in East Timor focused on helping post-conflict youth realize the power of their own ideas.

Dan is busy completing his next book on “Mission-Driven Bureaucrats”, which explores the relationship between motivation, management practice, organizational mission, and performance in the public service.   

Episode Transcription

(By Ingrid Ågren Høegh)

 

Theme music     You are listening to In Pursuit of Development with Dan Banik. 

 

Banik               It's great to have you on the show, Dan, welcome.

 

Honig              Thank you so much. It's wonderful to be here. 

 

Banik               I've enjoyed reading your work on foreign aid, Dan, and let's begin with how these days there's always this attention to making or achieving maximum impact. In foreign aid circles it's all about effectiveness and impact. In your brilliant book, you argue that putting frontline aid workers in charge may often be a much better strategy for ensuring the impact and effectiveness of aid than more top-down controls on aid delivery. Before we get into the book, I want to begin by asking you to reflect, if you can, on the role of ideas, of all these buzzwords in development, because the pursuit of development isn't really an easy task. Every day, people like you and me are proposing new solutions and terms, trying to give hope to people, trying to resolve complex problems that they haven't been able to resolve before. So, what do you think about all of these new ideas and buzzwords? Because one thing is to come up with them; another thing is to operationalise them, and to persuade the people doing this work in the field that they should adopt them. And that is not always easy. 

 

Honig              Yes, I totally agree. It's an interesting place to start: the role of ideas, and the role of the kinds of things that we do, the frames we provide. I guess, when you call something a buzzword, immediately we're biased against it. When something becomes just a term, you know, when it becomes unmoored from particular actions or from particular ways of thinking systematically about what it means and what it means to do whatever the thing is, I think it is almost inevitable that it loses effectiveness and becomes just part of the script. But that doesn't mean that ideas don't matter. In fact, they matter quite a bit. I think the fact that development sees so many new concepts and so much cycling between ideas (let's be more rigorous, let's think more about participants, let's think about what I'm focused on, management strategy), part of this is natural, and part of it is a sign of a fairly deep dysfunction in the field. Which is that there is a disconnect between the universe of decision-making, thinking, and institutions of the global North in which we both sit, and the reality of what the task is. I love the name of your podcast, in the sense that "in pursuit of development", like the American pursuit of happiness, is about the journey. Development, we can all define it differently, we can all conceive differently of the actions that are going to contribute to it, but what unites the field is a common desire to pursue it, even in its multiplex forms, and so I think it's inevitable that we end up with a lot of different ideas there. I do think that there is this pure view that just introducing a concept is going to change things; that's usually not the case, because the concept comes into a rich political economy, organisational incentives, etc. But I also think ignoring ideas is not going to get us very far either. We see that ideas shape how people think, how they act, and the concerns that they think about.

 

Banik               Yes, indeed. Some years ago, some of my students, after hearing me talk about all the stuff that didn't work, all the bad things in development (as academics we're always critical of everything), challenged me at the end of the term to come up with a talk that focused on what worked in development. Like success stories. And because of these students, I developed a module, I did a MOOC, and for many years I've been trying to understand and balance everything that doesn't work, but also what works. But, Dan, I've also understood now, increasingly, that in the aid world, and this is something I see in your book too, and in your work, there is sometimes too much pressure to prove that an aid project is successful: pressure for results, pressure to legitimise yourself and your actions, taxpayers worried that aid transfers are not having an impact, all of these corrupt leaders misusing funds, the pressure, if you're an aid worker in the field, to tell your bosses back home that everything is fine, so that they can report to politicians that everything is working well. So, what I'm trying to get to is this problem I've sometimes observed: what you term "field agents", or aid workers or NGOs, may not always be keen to talk openly about what is not working, for fear of losing funding or for fear of jeopardising their careers or their financial futures or their organisations. Is there too much pressure to prove that what they're doing is successful?

 

Honig              The short answer is yes. The long answer is, I think, a lot about the nature of the proof. What I mean by that is, first, I totally agree with the diagnosis. Often people in the field have not just no incentive but an active disincentive to discuss failure or anything other than overwhelming success. And, as you point towards, to know if we're doing something well, we have to be able to accept our failures and think about what's gone wrong. But a big part of this dysfunction is in the nature of the proof that we require all along the way. Which is usually in the form of quantitative metrics. Things that can be reported. Summary statistics. In this sense it relates a lot to the podcast with Jim Scott and this idea of what happens when we try to make things legible from afar. In Scott's terms, high modernism means that we focus on the things that can be observed from the helicopter above the city plan, as he put it in that podcast. And I think what that means is that we end up crowding out all the nuanced information and judgement it takes to actually make things effective. So, I try to distinguish between actually having an impact and justifying that that is the case. One of my papers has the title "When reporting undermines performance", and I think in some ways that's the central, most solvable dysfunction that I see in this organisational system. And that's this idea that by quantifying everything, by hitting targets, we believe we are making programmes effective; but in fact we restrict what it means to have impact to the sort of outputs that become unmoored from the actual development objective: outputs that were once good proxies, but stopped being good proxies when we put pressure on them to report development success. 
So, I advocate for incorporating more judgement, more knowledge, into the pursuit of development, and for building systems where we judge impact and think about accountability as something different than just what can be counted and reported. I often say that every aid agency has a version of the statement "we are committed to making every penny count", and my perspective is that we don't make every penny count by counting every penny, or even by measuring every output of the project, but rather by thinking more holistically about what we are trying to achieve and by empowering the field agents, NGOs, local leaders, those further down the delegation chain, empowering their judgement, because they're the only ones with the relevant information to make good decisions, and so we need to rely on them to get the best possible aid outcomes.

 

Banik               That's one of the many reasons I really liked reading your book. It really is fascinating reading, and I would recommend it to every person I know in the various aid agencies. I can be your manager in promoting your book; I've decided that I should perhaps stop being a professor and become a professional podcaster. But before we discuss the book in detail, I was wondering whether we could talk a bit about measurement, and I know this is something you've been working on, right? There has been in recent years this trend towards more data, more quantification. And some of it is because of all these ambitious goal-setting projects that we've undertaken, the MDGs, the SDGs. The critique has been that you can have ambitious goals, but the indicators themselves can be off-putting, or only measure one little thing, so that by measuring these indicators we're not really saying anything about the ambitious goals. Nonetheless, these goal-setting projects have made us focus much more on measurement. And there's also been this movement from measuring outputs to a focus on evaluating impact; some have been arguing for randomised trials, but even these RCTs have been criticised for not always being the gold standard, for not always being able to capture the complex dynamics on the ground. In some of your work, I've seen that impact is perhaps easier to measure in some sectors than in others. What I'm trying to get at is: how should we rethink measurement? What are the typical problems that measurement techniques face, and how can we measure smarter?

 

Honig              It's very much a horses-for-courses strategy. That is to say, it's not only that we need to change the actual measures we use in different sectors; the first question is diagnostic: what world are we in? Let me give a couple of hypothetical examples, invented on the spot. If we're trying to build a road in a relatively stable place, we can probably come up with a fairly coherent plan at the beginning, and we probably should have something like a blueprint approach, which often gets criticised but in this case might make sense: to pre-specify not just the outcome but also the outputs or stages along the way, and to observe and measure quantitatively, rigorously, whether we've done it. If we wanted to know if one road-building technique worked better than another, I think an RCT is a great way to do that, and that sort of regime, that way of knowing about the world, is probably going to serve us fairly well. In other cases, we can't look at the process, but we can see the outcome, and that is the case for payment for results or outcomes or impact. So, that could be a case where we can't observe the production process of the child nutrition programme very well, but we can see rates of stunting, which is what we care about, and we can evaluate different ways of doing it. In that case we might not want a blueprint-type approach, we might want to give more flexibility, but there again, on the impact side, we could do an RCT, look at stunting as an outcome, and understand reasonably well which intervention worked better, though why would be another question, and we might dig into that with other evidence. But there's a whole range of cases, and those are not limited in my view to things that involve politics or political economy, where we are unlikely to get a good, stable measure for either outcomes or outputs. One example that is more political is something like a civil service reform programme. 
Where we might need to adapt and navigate along the way, and where it's pretty hard to see what our outputs or outcomes would look like. It's also pretty hard to randomise the civil service: very hard to treat one group but not the other. But it can also be much simpler things, like delivering capacity. Capacity building programmes often illustrate this. In the US government, the standard measures for aid projects are run by the Foreign Assistance Bureau at the US State Department, which explains why they are called the F-indicators. I think the F-indicators are incredibly well-named, because I would give many of them an F were I to evaluate them, and one of those is the standard indicator for training, which is the number of people trained. If I went back in time, when no one had any incentive to report this number, and looked at trainings that happened before anyone was thinking about this, and asked which trainings touched more people or fewer, that seems like a plausible input into seeing which project was more effective. But of course, as soon as we put pressure on that measure in a managerial sense, if we say that what the NGO or USAID field personnel need to report is how many people got trained, we can see immediately how that is going to maximise something other than the actual goal of delivering useful information, or new ways of thinking, to the people who are maximally able to use them. And indeed, halfway along the way we might realise what's happening. I used to run a non-profit in East Timor, and I remember a programme that came in and provided wonderful training to agricultural extension workers working for the government, and what those extension workers then did with the skills they had developed was quit the public service and go work for private firms where they could make more money. So, it was indeed capacity building on an individual level, but if the goal of the programme was to improve agricultural extension, it maybe did not accomplish that purpose. 
And if we base our ways of knowing on the best possible metric, even when that's not a very good metric, we're going to run into some of those problems, and we're going to get exactly what we measured rather than what we hoped for. There's a management paper by Steve Kerr titled "On the Folly of Rewarding A, While Hoping for B", and I think this folly is present in development, but it's not uniformly distributed. The first question we need to ask is: what world are we in, and how good are our measures? And if our measures are pretty good but not awesome, maybe we need to do things very differently. 

 

Banik               I'm glad you mentioned the East Timor example, because that's how you start your book. That's a great example of perhaps measuring the wrong things, and in the process losing our focus on what the real purpose of the project was: you train people and they end up leaving, and who can blame them if they want to move to another, more lucrative profession, right? And this applies to nurses in Malawi, who may be trained by a government that spends a lot of money, and then end up in the UK because they get much better salaries and working conditions than in Malawi. So, when you were talking about capacity building, it reminded me of some projects that I've been involved in. I have to say that I actually don't like that term, capacity building. I feel sometimes it can be a bit derogatory. Even empowerment is one of those terms: "I'm empowering you in relation to somebody else."

 

Honig              I'd put on the list "target communities", so there is one group of people aiding and another who are the targets of the intervention. I use the term entirely because it's conventional, not because I like it. 

 

Banik               This is mainstream, so everybody uses it, and fine. But even when we think we've measured impact, Dan, attributing whatever we've measured to a specific intervention isn't always easy. And you were talking about civil service reform; with anything that has to do with capacity building, we somehow end up coming up with measurable things that maybe are not important in the long run. But these are important for reporting, for showing that a project had an impact, and I sometimes find this whole results-framework approach in aid perplexing. How do you use these neat categories of activities, outputs, and impacts? I sometimes just don't understand how I can even claim that some activities and their outputs have had an impact; I struggle to precisely identify these neat categories. What would your suggestion be for aid workers and scholars interested in understanding the linkage? Should we be adopting something other than this typical results-framework approach?

 

Honig              Yeah, I think we should. And I agree completely with the way you describe the state of things. In saying that, I don't mean we shouldn't care about results. I just think that caring about results isn't caring about results frameworks; we should care about the results, not the frameworks, right? And similar to what you said about attribution, you know, my friend and co-author Lant Pritchett has a paper called "Let's all play for team development", or at least that's a key phrase in the paper, and sometimes playing for team development means it's not going to be clear who scored the basket for team development. And when that's true, we all know that sports teams work better when the individuals on the team care about the collective goal rather than individual status. Almost every sport has a name for a player who looks good on paper but doesn't contribute to the team, and we see that in development, and the focus on what can be reported in a narrow, quantitative measure is part of the problem. And when that happens, I just want to pull out a point that was kind of implicit in your framing there: that's not just a hassle to write up and report, it actually distorts the activity that gets done, and so we end up with a lesser chance of victory over what we are collectively trying to address, as in the case you gave of civil service reform. So, what alternative frameworks are there? There are some suggestions of different ways of doing it. In their book "Building State Capability", Andrews, Pritchett and Woolcock propose the idea of a search frame, rather than a log frame. 

 

Banik               I like that book. And Lant was here in Norway and he gave a talk at Norad a couple of years ago, I enjoyed that. 

 

Honig              I will say that in my empirical work, in some ways the interventions that are most effective get there by essentially gaming the results framework. What do I mean by gaming? We write in metrics that we think will be satisfied without our activity at all, things we think will happen just because of trend. And that gives us the space and flexibility to do good work and to treat accountability in practice differently than we formally do. Of course, that all has to happen in the breach of the way the official system is meant to work. There is a wonderful paper, "Hiding Relations", and this is a kind of hiding-relations move, where how we de facto manage our intervention is very different from how we manage the results framework we're in. But if I were to suggest something more systematic, something that didn't require operating in the shadow of the agency, I would say that the place to start is, again, with the question of diagnosis. Where are our results frameworks distortionary? That is something we can think about systematically. And where they are distortionary, what alternative ways of achieving results can we conceive? It took me a long time to see this. As I talked about this book, I would often get a question that, if I'm honest, I found totally confusing the first few times. The typical way of asking it would be something like: "Dan, you've just presented all this stuff, and I think I agree with this diagnosis; it is the case that our management systems or results frameworks are getting in the way. But we care about accountability. Doesn't this undermine accountability?" 
And what I found confusing about that is this: if we have the view that what we're trying to do is maximise results, or maximise value for money, and if you've conceded in the premise of the question (you don't have to agree with me, but suppose we're in agreement) that the thing in question reduces value while costing money, that it undermines results, then what does it mean to say it gives us accountability, if accountability is meant to be a tool to achieve that outcome? And, you know, I guess over time I've adopted the view that in this sector there is an implicit assumption that to be accountable is to count things, to measure and report them. But of course accountability can be achieved in lots of different ways, and the dictionary definition of accountability is all about giving account, justification, understanding, not about a certain kind of quantitative reporting. I think a system that gets beyond the distortionary results framework needs to be one where there is more conversation, more interrogation of judgement, rather than a reliance on what we can quantify and transmit: thin information in the Jim Scott sense, rather than thick information. 

 

Banik               That's a great point. I enjoyed reading that paper where you reflect on your book tours; I'll include it in the show notes so that others can read it. I think that's a great thing. Sometimes it's good for us to refine our thought process after we've written a book and gotten feedback. We'll return to this later. You have coined this very attractive phrase, "navigation by judgement", and this, as I understand it, is about allowing the personnel of international development organisations to incorporate soft information, information that field agents can observe but that, as you were saying, can't be easily codified or verified by their supervisors at headquarters in donor countries. What I really like about this concept of navigation by judgement is that it allows development projects to be more flexible: if this kind of thinking is adopted, one could adapt to changing circumstances. And you've been arguing that navigation by judgement is a critical managerial tool that could help maximise the positive impact of aid. I'd like you to reflect on how this concept of navigating by judgement can be distinguished from the terms autonomy and discretion. 

 

Honig              So, I should say at the outset that I don't care what colour the cat is as long as it catches mice; that's my school of thought on concepts. To the extent that navigation by judgement overlaps with terms you find more useful, whether autonomy or discretion or doing development differently or problem-driven adaptation or any of the other terms that threaten to become buzzwords in the sense with which we started the conversation, I'm totally fine with that. But I guess the reason I coined a new term here is because of how autonomy and discretion are conventionally used. Discretion is usually thought of as something granted, typically by politicians to implementers, at least in public administration and public policy. Autonomy is the agent's ability to act independently, which could be the result of a grant of discretion, or could be, in a sense, seized: I might act autonomously because there's a span-of-control problem and my supervisor can't supervise me even though they are meant to, and so I have autonomy to do more of what I want. Navigation by judgement, as I mean it, is a sort of managerial choice, to be taken for bureaucrats by bureaucrats. The idea basically is that those at senior levels of an agency can decide, can elect, to manage their agency in such a way as to empower the judgement of those closer to the ground. In that sense it gives them autonomy; or, if we want to think about it in a non-political sense, it gives them greater discretion. But that discretion is in some sense a zone of independent action, and in some sense a different way of thinking about how their actions are evaluated, and about how we work collectively to inform our judgement and become better practitioners of whatever it is we're attempting to do. And in that sense, navigation by judgement is what the cover of the book shows: a gentleman at the steering wheel of a boat, right? 
The cover image is from a sea-faring town in Massachusetts, the setting of the movie The Perfect Storm and of the book, and basically the reason I use it as the cover is that I think fishing fleets are roughly analogous here, in that there is a managerial strategy in which we are best off having the person who steers the boat be the one who is actually on the ocean, as you say, able to adapt and respond to the waves with their hands on the wheel, rather than trying to navigate by GPS from shore. And you could describe that as a corporate approach of giving people sufficient autonomy, or of granting discretion over boat behaviour. But I guess I think of it more naturally as a decision that we should navigate by judgement: a managerial strategy such that we make sure that those centrally responsible for guiding what happens are the ones closest to it. 

 

Banik               I think you make a very persuasive argument there. I'm thinking about how you would consider the fact that we as individuals are very different. Some of us like to talk, some of us are very quiet, some of us can be provocative. To what extent do you see personalities determining the kind of feedback or judgement? Sometimes I believe that I know what I'm talking about, and I know I'm right, and sometimes I can make a persuasive case to others; other times I'm unsure, so it's only when someone else says something that I feel I should have said it first, you know? So, how do you think individual personalities matter in this argument that you make? If you are a junior, you may be fearful of saying what you mean because your senior would say you don't have any experience. It could be the political culture of the organisation, right? It could be the relationship between the juniors, middle-level management, and the seniors. All this factors in, does it not?

 

Honig              Totally. And to start where you started, with personalities: there is also the fact that we could be wrong, right? It certainly is true that sometimes I think I know what I'm talking about and my wife disagrees, and she's often right about that. It's not simply dispositional. There is also the simple fact that people will differentially make good decisions when faced with the same circumstances, not just express those decisions differently. And, to make the problem even harder for myself, people are going to differentially care about making good decisions. And it is certainly the case that what I'm advocating is a managerial system that's going to allow those who care the least to do less. If we imagine someone who truly wants to shirk in a job, and I don't think this is very common in NGOs or aid agencies, but to the extent that there are those who don't really care about the objective, more shirking will be possible in a navigation by judgement system. I think we often imagine that if something flows through the head and heart of an individual, it can't be thought about systematically. What I mean by that is, we already have a system where individuals come into an agency, experience a management practice, and respond to it. I know many people who have left the profession, or left agencies, precisely because they find that while working for these agencies they cannot actually engage in the laudable, welfare-enhancing actions that brought them to the field in the first place. So, I think there are selection dynamics that work the other way too, in the sense that we often systematically experience adverse selection out of precisely the people, or some of the people, who would be most amenable to, or would achieve the greatest results under, a navigation by judgement kind of regime. 
And so, there will always be heterogeneity, variation in how people respond to any management practice, and as an agency manager I should think about that, evaluate it, figure out what is working for whom, and adjust. Some people are not appropriate for certain tasks, some people are not appropriate even for certain agencies, and that's a natural part of labour markets. This does not mean we can't think about it systematically; I think we should, and it would be to our benefit to do so. We can also group tasks. One of the things I suggest in the conclusion of the book is that, to the extent there are people of different orientations inside the agency, if certain types of people are better suited to the road-building projects, and certain types are more naturally inclined towards the stunting-reduction projects, and others more inclined to deal with the inherent messiness and insecurity of the civil service project, we can think about that. It need not involve entry and exit, although it can. In terms of getting it done, which is where you turned with the mention of the different levels in the agency and with your note about politics, let me start with the politics. I think a lot of people who work in aid agencies have already had the thought: this sounds lovely for a pointy-headed academic, but I work in the real world, and while I completely agree that these numbers are distortionary, what am I going to do about that? And it certainly is right that politics, what executive boards and parliaments and such want to see, is going to matter here. But there too, I think there is an opportunity for more dialogue. I once had a conversation with an aid sceptic who is important in the political environment in America, who said he'd like, if he could, to eliminate all foreign aid. 
And I said, fair enough, we disagree on that; but as long as aid exists, would you rather it actually helped the people it went to, or not? Would you rather the system provided you information that looked good but did not actually achieve results, or not? You might care about winning hearts and minds, and I might care about improving welfare, but we share a goal: conditional on the project existing, it would be nice if it actually accomplished its goals, rather than just seeming to while accomplishing very little. I think there is room to change that conversation. Turning to the levels question you had: you mentioned one of my papers, and the premise of that paper is that I had a theory of change when I wrote the book, which was that lots of people maybe don't realise that this reporting system, this results-framework world, is undermining performance. When I went around talking about the book, what I realised is that I was largely wrong about that, at least in regard to people who work at aid agencies. That is, there is widespread agreement with the diagnosis; what there isn't is a sense of what to do about it. As somebody senior in an aid agency once put it: this is fantastic, Dan, you've used sophisticated methods to show us what we already knew; what can you say that we don't know? And I think one of the things I can say, that many people seem not to already know, is how widely this diagnosis is shared. People often feel the need to confidentially tell me that they think this is a spot-on way of thinking about it, but that they are alone in their agency in thinking so, even if they are the 30th person in their agency to tell me that over the course of two days. 
To your question of what people at different levels of the organisation can do: there is a kind of pluralistic ignorance about this, that is, many people are not aware of how widely their views are shared. Starting to push back or ask questions, questions like "Why is it this way?" or "Is that a rule or a custom?" or "I don't understand, I'm new here", these are good questions to ask. And in some ways the people at junior levels of the agency, in addition to often being the closest to implementation, depending on agency and role, have the advantage of it not yet being a career. If you are a junior in an agency, and you joined your agency because you cared about development impact, wouldn't you rather see if you could move your agency in that direction? In some ways I think junior folks have the least to lose, in part because they have the shortest histories in their agencies, but there are things that all of us can do. In that paper I collect all the good ideas I heard from people in aid agencies and present them as if they are my ideas, which they aren't, and I'm clear about that. I think there are lots of wonderful suggestions and things that people are already doing to try to achieve development impact. 

 

Banik               In a way it seems like you are advocating for a bit more activism, or that we can't be as impersonal as a bureaucrat within any other ministry would be, that maybe we have to show a bit more interest and vigour in trying to change things, and that is actually what I was trying to get at in the earlier question: it doesn't come naturally, I suppose, to a lot of people. But when you were replying to my earlier question, I was also thinking about the role of knowledge and solutions. A lot of people who work in aid agencies, say in Norway, are people I've taught, and I hope that when they join those agencies, they are aware of the literature and of whether there are actually concrete solutions. Sometimes there aren't any direct solutions or a clear way forward; it is more about navigating difficult terrain, about maybe feeling your way through, and I think in those situations it is more difficult to provide the kind of feedback that says this path is wrong, we should be changing paths. I have sometimes been frustrated with the Jeffrey Sachs argument that we know what is to be done, we just need money. I don't think it is as easy as that. 

 

Honig              I mean, it's hard for me to criticise Jeffrey Sachs, in that he shares my tiny home town of Oak Park, Michigan, so you're speaking to the second most prominent, and probably the only other, development scholar from Oak Park, Michigan at the moment. That said, I agree entirely that solutionism only gets us so far, and that's not very far at all. Before jumping into that, just to say on your point about bureaucrats, it's lovely that you went there. My current book manuscript is titled "Mission-Driven Bureaucrats", and it's about public sector agents in the developing and developed world, across a range of bureaucracies, arguing that we often get better outcomes by focusing on supportive management and encouraging mission-driven action, rather than on monitoring and control. The agent bit that we've been talking about travels relatively widely, and in a few years when that book is out, I'd love the opportunity to come and try to convince you and your audience. On the question of what your students do, what my students do, as they enter agencies, I think the crossing-the-river-by-feeling-the-stones metaphor is really an eloquent one, and we can pick up from there. I agree entirely. It is rarely the case that we know that the answer is to build a bridge over the river, and what kind of cement we should use, and what kind of supports. For people who want to be in the domain of answering those kinds of questions, there is actually a wonderful infrastructure, and I am in favour of the infrastructure that supports that work; I have in mind things like JPAL and Evidence Action and the world of results-based policy making. I think that world is fantastic, I just think it's far from universally applicable to the range of things development agencies do, and indeed misses many of the most important ones. And there, when we are feeling the stones, the way any group of people feels the stones across the river is first as a group. It's very hard to do individually. 
It's easier to do if we collectively discuss where we are going. It's hard to do if you have to go back to exactly where you started. It's hard to do if you're never allowed to take a wrong step. If we can't see all the way across, it is almost certain that we will sometimes end up at a local maximum: we get partway across the river but end up on a peninsula, or get close to crossing and need to back up and think about it again. And only if we give ourselves the space to do that are we likely to actually feel our way across the river. It's also going to be slower, right? So, I use a kind of near-neighbour metaphor in the book, between two types of cars. I'm from Detroit, and we can imagine a Corvette that's faster than a Jeep on a well-paved road. But if we go off-road, the speed of both cars will fall, though the speed of the Jeep will fall less, right? We need investment in both. I'm suggesting that if we asked almost anyone whether taking the Corvette off-road is a likely route to success, they would say probably not, and they'd be right about that. 

 

Banik               I feel I haven't done justice to your book, because we really haven't discussed the case studies. So, towards the end of this conversation, Dan, could we focus on the USAID and DFID projects that you studied in Liberia and South Africa in relation to capacity building and health? I found it quite interesting that you conclude that in some instances navigating by judgement will not work, and you have this really impressive table on page 25 of the book on the costs and benefits of navigating by judgement. I'd like us to conclude this conversation by discussing very briefly, if you can, some of the conditions where you think navigating by judgement is useful, and some of the conditions where you feel top-down management is the best strategy forward?

 

Honig              Yes, thank you. I feel no injustice done to the book. Basically, what I theorise and then test in these case studies of USAID and DFID in Liberia and South Africa, in health and capacity building, complementing that with econometric analysis of 14,000 projects across 40 years, 180 countries, and every development sector that is coded, is that these are strategies with differential returns, strategies more appropriate for different kinds of tasks. These being navigation by judgement on the one hand, and navigation from the top on the other, which we can think of as a conventional log-frame, results-matrix, quantitative target-setting approach. Basically, the table summarises what I think the advantages are. Some tasks need more agent initiative and more soft information, and require more flexibility, and where this is true, we need navigation by judgement. But that doesn't mean this is always going to be the best strategy, and the case that you highlight in the question is one where USAID, which largely navigates from the top, and DFID, which navigates more by judgement, are both trying to deliver anti-retroviral drugs to pregnant mothers in South Africa as part of the prevention of mother-to-child transmission of HIV/AIDS. Basically, USAID says let's measure everything, put targets on everything, let's figure out how many pills go into how many mouths. I'm not sure that's a great way to build the government systems that will continue that activity, but it sure does better accomplish the purpose of delivering the drugs to people and cutting through the politics. South Africa was then in an era of denialism, where official government policy doubted the link between HIV and AIDS, and USAID's approach did better than DFID's strategy, which got caught up in politics and meetings and a general political economy unfavourable to the activity they were attempting to conduct. 
I think that's the great case for measures: where we have clear metrics that are reliable and will not distort the activity, and what we care about is delivering the thing, we should use them. Where we have clear measures of outcomes that we can evaluate, as in the road building or the stunting, hypothetically, we should use them. When we are delivering vaccines now, in this Covid world, we should use them. These are activities that are incredibly important to welfare and incredibly amenable to metrics, where what we care about is oversight and standardised behaviour, and where good delivery probably does not involve a lot of judgement: putting a vaccine into someone's arm requires only a limited amount of nuanced soft information that can't be observed from afar. And when that's the case, we should not navigate by judgement. The argument is not that this is a management answer for all seasons; it is that there are many important seasons where it is indeed far superior to the current options we are taking. And we don't see that because we allow our systems to lie to us. And we encourage them to lie to us. And most people who work in the aid industry, many at least, agree with this and have to labour under that reality: they work in agencies where they have the sense that what they are doing does not accomplish the thing they care about doing, or they are not aware of what it does accomplish, because of the way they have to think about data and results. And where that's true, it's a shame for the enjoyment of those jobs, but more importantly a shame because it means that those dollars, from those citizens who are asking for accountability, are not optimally spent, and sometimes not well spent at all. 
And to the extent that we as a universe of people, as scholars, as practitioners, are ultimately accountable for that money, by which I don't mean counting things, I think we owe it to the citizens of both the developed and developing world to actually act in pursuit of development, rather than in pursuit of a mirage of development that seems like it is making progress but in fact satisfies no one. 

 

Banik               I've had such great fun chatting with you today, Dan. Good luck with your new book and I'd love to have you back on the show when that's out. Thank you so much for coming on the show today. 

 

Honig              Thank you, my pleasure. Am I the first Dan to be your guest?

 

Banik               You are indeed!

 

Honig              I'm honoured to be the second Dan. Thank you for having me. Thanks for the questions. 

 

Banik               If you enjoyed this podcast, please spread the news among your friends and share it on social media. The Twitter handle for this podcast is @GlobalDevPod.

 

Thank you for listening to In Pursuit of Development with Professor Dan Banik from the University of Oslo’s Centre for Development and the Environment. Please email your questions, comments and suggestions to inpursuitofdevelopment@gmail.com