In Pursuit of Development

Humanity's Enduring Quest for Power and Prosperity – Daron Acemoglu

Episode Summary

Dan Banik and Daron Acemoglu discuss the critical role of technology in shaping progress throughout history and today. The way we utilize technology can either benefit a privileged few or promote widespread prosperity.

Episode Notes

We engage in a discussion centered around Daron Acemoglu's latest book, co-authored with Simon Johnson, titled Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity. The choices we make regarding technology can either advance the interests of a select elite or serve as the foundation for widespread prosperity. But technology's trajectory can be, and should be, controlled and directed for the benefit of all. The remarkable advances in computing over the past fifty years have the potential to be tools of empowerment and democratization, but only if decision-making power is dispersed rather than concentrated in the hands of a few overconfident tech leaders.

Daron Acemoglu is Professor of Economics at the Massachusetts Institute of Technology, MIT.  @DAcemogluMIT

(Cover photo of Daron Acemoglu by Cody O'Loughlin)

Key highlights:

Host:

Professor Dan Banik (@danbanik  @GlobalDevPod)

Apple Google Spotify YouTube

Subscribe: 

https://globaldevpod.substack.com/

Episode Transcription

 

Banik

Daron, it's lovely to see you again. It wasn’t that long ago that we last chatted. But anyway, I'm so happy to see you. Welcome back to the show. 

 

Acemoglu

Thank you very much, Dan. It’s my honor and pleasure to be with you again.

 

Banik

I have to say, Daron, I am in awe of your productivity. I mean, you churn out books and articles faster than I manage to do a podcast episode. I mean, you're not human. You may have cloned yourself. I don't know how you do it, so maybe that's something you can tell us about a little later; share your secret. But when you were last on the show, Daron, I asked you – actually, I think we began the conversation by talking about – how you understood "development". And I want us to start today by talking a little bit about how you understand the term "progress", because this is often defined as good change, purposeful change, right? Progress that makes our lives more bearable, more livable. Is that how you understand progress?

 

Acemoglu

Well, I think I would say that thinking about progress has been a journey for me, and in fact the research I report on in the new book with Simon Johnson, Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity, is only a part of that journey. Because in some sense I started, like many social scientists influenced by the Western tradition, believing that there was a powerful set of forces pushing us towards progress. Of course, political failure, economic failure, and social failure have always been part of my interests. I saw them as ubiquitous, but still in some sense anomalous as well, because there were these opportunities, technological and social, for modernization, for change, that I thought were extremely powerful, and ultimately those powerful forces should have a major impact, even if we don't have a teleological view of economic development and political development; even if we reject the sort of simplistic views like "the End of History" or whatever. There was a sense that these amazing technological breakthroughs are going to create forces that will benefit everybody, and that's sort of baked into the thinking of most economists, because of the models that we build and also because of the way that we read history. And if you look at the long sweep of history, you cannot but conclude that we are amazingly fortunate for having been born in the late 20th century rather than, say, the early 18th century. We are so much more prosperous, and lead such comfortable, healthy and enriched lives, that it's difficult to say technology isn't the most positive force in recent human history. But the more you dig into history and the more you dig into what's going on today, I think you are forced to entertain two conflicting ideas in your mind at the same time. One is that indeed we have been amazingly fortunate in terms of the medical advances, technological breakthroughs in the production process, and improvements in infrastructure. 
And those are fruits of industrial technology and scientific knowledge applied to every aspect of our lives. But at the same time, technology isn't equal to progress. Technological advances destroy a lot of useful customs. They destroy a lot of accumulated knowledge and understanding, and they sometimes, in fact quite often, create additional new dependencies, in terms of those who control technology, those who shape its direction, becoming dominant and exploiting that position for their benefit, at the expense of many people, especially workers. And the economic forces that are often second nature to us as economists, and part and parcel of our models, aren't always powerful enough to overcome these political and social factors. In particular, even if it is the belief of most economists, and built into some of our models, that technological changes that increase average productivity should translate into higher wages, that doesn't seem to be something that happens regardless of the direction of technology and regardless of the nature of the institutions in which these technological changes are embedded. And that's the background. That's the way for us to think about progress. And Simon and I take a very clear position on this. For us, progress has to be accompanied by shared prosperity – prosperity that lifts many, many different groups in society at the same time. It's not enough for GDP to increase. It's not enough for the tech barons to become richer.

 

Banik

We'll return to this focus on tech later. For the YouTube viewers, this is the version that I was able to get hold of at a wonderful bookstore in Delhi. Daron, as you know, I recently spent two months in India, and one of the things that I thought of when I was reading this book there – I was taken back to my boarding school days, when I was a prefect and I had to give a speech. And by the way, it was not the content of the speech that was important. It was whether and how well we were able to deliver it, whether we were stumbling or fumbling, which made people laugh. But I remember distinctly the topic of my speech – this was 1986 or 1987 – the title of my talk was "Is Progress Assured", which was inspired by an essay I had read. I wasn't reading very much, but this was an essay by Bertrand Russell. And in that essay, I remember Russell saying that progress isn't inevitable. We can't take it for granted. It must be more deliberately planned. That we should be relying much more on social science. We can't just trust what happened in the past. And as I understand from your book, we need some sort of a deliberate movement; progress entails a human-led process. It just can't be automatic, right? And this concept of shared prosperity that you're talking about – it can't just be led by a small group of people. It should be broad based.

 

Acemoglu

Absolutely, absolutely. It takes work, and I think Bertrand Russell is actually one of the most inspiring figures for me. I think he's a wonderful intellectual and a very honest and creative person. But I also depart from Bertrand Russell's view that we, the intellectuals, have to be at the forefront of that deliberative process. It's inevitable that technological leaders, politicians, and public opinion makers are going to be disproportionately influential. But when it comes to shared prosperity – and now we're delving deep into the main thesis of the book – it has to be shared prosperity bolstered by shared responsibility in society. So, it is a process that really depends critically on democratic governance. And it's not like we can trust even the most enlightened tech leaders. Not that I think Sam Altman and Elon Musk are to be trusted under any circumstances, but even if you trusted them as human beings, and even if you had good views about their intentions, you shouldn't trust them, because they're going to be pursuing their own vision, or a shared vision of their social class, of their occupation, of their leadership. The critical aspect of our thesis is that technology, which is so important for our existence, for our relationship with other humans and for our relationship with nature, is an extremely malleable thing. There isn't a preordained path of technology. We make technology, and we decide its direction. But who is that "we"? Is it Elon Musk and Sam Altman and Mark Zuckerberg? Or is it us as a society? That really matters. It matters even more because, again, our emphasis clarifies that technologies have huge distributional consequences. Sure, productivity increases, but by how much? And who controls it? And who wins out of that? Whose productivity increases? Who has the institutional support for getting a bigger share of the pie as that pie is getting bigger? 
All of these questions mean that some sort of shared participation in deciding, determining, and influencing the direction of technology is critical.

 

Banik

Even consultation.

 

Acemoglu

Consultation and voice, sometimes unruly voices. Yes, sure.

 

Banik

I've heard you say this before, that you're not against technology as such, and we agree that technology has made our lives better. Our forefathers did not have it as good as we do. We are living longer, we have better health, better access to medicines. All of these innovations have been very helpful. The fact that you are in Turkey and I'm in Norway today, and we are communicating. But I do see in the book that you are cautious. You warn us not to be too optimistic, that unbridled optimism is dangerous. We've discussed this before, during our last conversation on the show, that there's so much doom and gloom these days that we need optimism. And if there's one area where you get that optimism, it is how technology can help resolve the climate crisis. Climate change, or global boiling, is perhaps one of those areas where we feel there's no hope, and the only hope in sight is perhaps technology. So, you hear a lot of our colleagues talking very warmly about how technology is going to solve this problem. In this sort of situation, where there's no hope, just despair, would you say that there are grounds – that it's sort of a human thing – to be optimistic about the role of technology?

 

Acemoglu

Well, look, there are some things I am optimistic about. First of all, yes, I am definitely not against technology. I see our future intertwined with technology, and I think the right types of technologies would lift us up, would make us more democratic, and would make us more equal. Moreover, I am also optimistic about the types of socially beneficial technologies. We can talk more about this, but I've already given the basic outlines of those – the types of technological changes that are socially beneficial and feasible. For example, AI and generative AI can have tremendously human-complementary roles. They can even have pro-democratic roles, so those are technologically feasible. On the other hand, the book is against techno-optimism and techno-utopia, and it is in some sense a corrective against them. Because the way that I interpret techno-optimism, at least in its more pernicious form, is that you don't need to worry about technology: it will by itself, automatically, inexorably, take us to a better place. And what that does is pacify us. By the way, it's not the only thing that pacifies us and makes us complacent. Doom and gloom does too. The US media is mesmerized by technology, and it is dominated by a dichotomy, in fact a false dichotomy, between those who are amazingly optimistic about technology – it will solve all of our problems, from climate change to pandemics, from inequality to poverty – and those who think that we are inevitably heading towards apocalyptic advances in artificial intelligence that will bring killer robots or a super-intelligence that is threatening. What's common to both of these perspectives is that they elevate the tech sector, because the tech sector is either bringing us all the good, or it is so smart that it is creating super-intelligence. Hence both of them have great reception in the corridors of power in Silicon Valley and Washington DC. 
And they also absolve us of any responsibility, because there's nothing else we can do either. We just let everything work out for the good, or the world is going to hell and there's nothing we can do about it. But as we have already established, my perspective is that we have to get involved, and there are very important, consequential choices. So, this false dichotomy is not just false; it's dangerous. It makes us passive. It makes us complacent.

 

Banik

And maybe part of the problem is that we have in politics or in tech people who are very good at persuading us. So, the power to persuade, I think, is important here. And I like that section in the book where you talk about how some people, whether a political leader or a tech leader, have a vision. And that vision can be a form of power, and power itself can also shape a way of looking at the future. And some of these people have a past track record. They're successful. They have influence, they have confidence, and they're able to persuade us that whatever they are articulating is interesting and important. But as I see it, you're trying to say we should be taking back the agenda-setting function. We should be articulating our interests better and resisting temptations or attempts by others to suppress information.

 

Acemoglu

Absolutely, 100%. You said it so well, I couldn't put it any better, but let me just amplify that point. In the book we discuss one other – I mean, several other – technological transitions where the forces that push towards inequality have been very strong. One of those is the cotton gin in the US, which revolutionized Southern agriculture, transforming the South from an agricultural backwater into the world's biggest exporter of cotton, which then became, of course, the main input for the British textile industry that was so important for the global economy at the time. This was all made feasible by an important set of advances culminating in the cotton gin of Eli Whitney. Amazing; productivity skyrocketed. Hundreds of landowners became fabulously wealthy. Tens of thousands of middle managers and skilled artisans became very wealthy. Small owners of land became very prosperous. But what about the workers? The workers who were producing cotton on cotton plantations were the enslaved people. Their conditions were much worse. They were moved to the Deep South, conditions got much worse, and coercion increased. Why is that? Well, it doesn't take a genius to work it out. This was an amazing breakthrough, but it happened in an institutional setting in which powers were very unequally distributed. Coercive power was in the hands of Southern landowners, and they tightened coercion. You shouldn't expect that economic forces by themselves are going to ensure that wages are going to increase. In fact, they did no such thing. But if you want to understand why Southern planters had so much money – some of it was their control of the means of coercion. They had the guns, they had the guards, they had the support of the local sheriffs and politicians. But they also had persuasion power: they convinced the entire country to go along with their, you know, strategy or project of slavery. 
And if you fast forward to today, the guns and the tanks and the sheriffs have become much less important, and the power to persuade has become much more important. Today, the consequential decisions are about the future direction of digital technologies, of communication technologies, of AI, and Sam Altman or Mark Zuckerberg or Elon Musk have a disproportionate effect on them. That's not because they have tanks or private armies, but because they have persuaded us of their compelling, influential, attractive vision.

 

Banik

Yeah. I think what I was perhaps highlighting was more of the soft aspects maybe of persuasion, whereas coercion is hard power.

 

Acemoglu

Exactly, that's it.

Banik

The threat of violence is a great way to persuade people to, you know, fall back in line. But to come back to the historical overview that you provide, which is wonderful (lovely pictures in the book, by the way). There are all of these inventions, interventions, innovative things that have taken place over the years – in agriculture with the ploughs, or irrigation systems, crop rotation, you name it. You mention textile factories, canals, the Ford Motor Company. All of these fascinating case studies that you write about in the book. Fantastic. What do you think is the common underlying feature in these historical examples, Daron? Because you often talk about how important history is. Is it that the machinery, all this progress, turns workers into cogs? Is it that they were controlled by a few? That when labor was able to push back, it was able to get better wages? The Ford Motor Company actually provided, was it, $5 a day way back, which was fantastic. Is it that control over technology that is the key? Is that the main argument?

 

Acemoglu

Well, control over technology is key, but the way it works out is intimately related to two pillars of shared prosperity that we highlight in the book. These are not the only two, but they are the two that are, I think, most important, and hence the ones we focus on. One is about the direction of technology, and the other is about the institutional context and the balance of power. The direction of technology is about what new machinery and new advances do. Do they sideline workers, turn them into cogs, as you put it, or automate work, hence making workers less necessary for production? And if that happens, even if some economists believe that there will still automatically be powerful forces for increasing wages, both careful theory and careful empirical work show that it's not so. When automation is very rapid, it creates a much greater advantage for capital, bosses and entrepreneurs, and much smaller gains for workers. And that's natural, because what automation is doing is reducing the centrality of labor for production. Wages in a competitive, idealized market are going to be proportional, or equal, to the marginal product of labor, meaning what labor brings to the table in terms of its contribution. How much more can you produce by hiring one more worker? You'll never be willing to pay much more than that for that additional unit of labor. So, that marginal product is key. That contribution is key. But what automation is doing is reducing that contribution: I don't need the workers anymore. There is an image that is often repeated about the future of the modern factory: it will have two employees, a man and a dog. The man is there to feed the dog, and the dog is there to make sure that the man doesn't touch the equipment. Well, if we are really heading towards that factory, that's not going to lead to higher wages, because factories don't need the humans. The humans are not contributing much, or anything, to the factory. 
So, if that's the case, it is horrible for labor. Fortunately, that's not the only possibility throughout history, and that's why we delve into history so much. We show that there are other technological changes, adaptations, new products, new tasks that increase and elevate the role of labor. And when that happens, laborers' marginal productivity increases, and the natural market forces are much more favorable to higher wages. So that's the first pillar – a balanced technology portfolio, where we automate work, of course; that's always going to be with us, it's been central. But at the same time, we also create new capabilities, tasks and activities for labor. And the second pillar is about the balance of power. Whatever you do for labor, if you increase their productivity, if you increase output, if at the end of the day you have all the power, why would you give more than crumbs to the workers? You can just coerce them, shut them up, and take all of the gains for yourself, especially if you're ideologically aligned with that sort of viewpoint, which is another part of this balance of power. So, if you look at history, the periods in which prosperity has become shared are those when we see technology go more in the direction of creating new tasks, increasing worker productivity, making workers more central to the production process, and those during which we have countervailing powers – against capital, against politicians, against tech leaders. So, in Britain, for example, the first 100 years of the industrial revolution were not super good for workers. In fact, they may have even experienced lower real wages, harsher working conditions, and shorter lives. But then from around 1850 onwards, things are very different. Why? Because the UK becomes democratic, the trade union movement becomes legal and able to negotiate better working conditions and higher wages for workers, and the direction of technology shifts away from digging coal mines and automating work in textile mills towards making workers more central in new industries – in railways, in chemical and steel plants. So, it is both of these pillars coming into their own.

 

Banik

There are two things that I'd like to raise here. One is the fact that maybe we need to distinguish between what is prosperity for a few, shared prosperity for somewhat more people, and then welfare for many. So, there's a sort of continuum. The second one is – and I don't know if I saw this in the book, but I've been thinking a little bit about it, because I love gadgets. I'm a tech guy. I have a smart home. I love automation, Daron. I love the fact that I can control the temperature and lights in my house; the kids usually never turn off the lights…

 

Acemoglu

So do I. So do I.

 

Banik

But it also means that sometimes I wonder whether all of this technology is actually necessary. Take wearables, for example. It's great for me to track all my movements, but my watch never tells me that I'm doing too much exercise. I can only do more every day, even though this is not advisable. So, with the way automation is going, there may be things that we didn't think of before that we find beneficial, but there are also things – like the lawn mower; I love mowing the lawn, I don't need a robotic lawn mower. You begin to wonder whether technology is demand driven at all. So, two issues: one is the understanding of shared prosperity versus welfare, and the other is whether technology is always beneficial.

 

Acemoglu

Well, look, I mean, I think there are many deep issues here, and we don't attempt to deal with all of them. In fact, we do not define welfare formally, because welfare is a very difficult concept. Whose welfare, from whose point of view, and how do we judge new things? So, we talk about shared prosperity. GDP isn't enough. We really need to look at the distribution of GDP and the conditions under which workers are enjoying it. Are Amazon workers a picture of happiness? You know, they are getting paid $15 an hour when comparable employers are paying $12 or $11 an hour. So, on that basis, you'd say Amazon is a great employer. On the other hand, it completely robs workers of agency. It's an extremely regimented workplace, so perhaps it dehumanizes them to some extent. So, those are issues that we do get into to some extent. But in terms of technology creating needs – I think that's a very difficult thing to talk about, but I actually welcome it. You know, the smartest entrepreneurs, the most visionary leaders are those that identify new goods, new products: technologies that create their own demand and their own needs.

 

Banik

Do they anticipate our needs?

 

Acemoglu

They anticipate, or they shape, our needs. To the extent that that happens in a socially beneficial direction, I think it's great, but there's another aspect to it which is very closely linked to the concept of so-so automation we discuss in the book. Some of these technologies are very marginal, so they barely improve productivity, but their distributional effects are huge. Let me give you two examples, the second one more controversial than the first. The first one is, I think, fairly non-controversial: automated customer service. You know, many companies have saved money by going into automated customer service, but if you look at how much productivity improvement there is relative to the norm in the past, where humans did it, it's actually very little. Some of the costs are shifted to the customers themselves.

 

Banik

You mean like the self-checkout kiosks?

Acemoglu

Self-checkout kiosks, or when you have your flight canceled and you call the airline, but instead of talking to a person, you spend 60 minutes talking to an automated system. So those things are what we call so-so automation. They only bring a little benefit in terms of productivity, so their distributional effects are more dominant. Now, of course, in the future we might have much smarter machines, and then customer service can be automated, but it's a question of timing. So, I think for some tech innovations, that issue of only marginal improvements is important. The image that I always have in my mind is kids sitting at a table and texting each other. You can talk to each other! So the value of texting and the technological capabilities of communicating via social media or WhatsApp are minimal. In fact, they may be negative, because the more you do that, the more you destroy your real-world social networks. So, I think those are the ones that I would question. But technology creating its own needs – I think that's something I welcome. That's part of the creativity of the entrepreneurial class.

 

Banik

I noticed something in India on this trip, and I don't know if you've noticed the same in Turkey. At least it was very different from Norway. People were on the phone all the time – in the gym, in the pool. People were always talking, and some of my colleagues actually agreed and admitted that there was no downtime anymore. 

 

Acemoglu

Well, I think these things are very norm-determined. In the US, nobody talks on the phone anymore. It's all texting, but it's not any better. You're still on 24 hours a day.

 

Banik

Same here, yeah.

 

Acemoglu

You're still online. So, I think on the phone, you at least hear the human voice.

 

Banik

That’s true. 

It's just that nobody wants to call me anymore … One of the things I also liked is that you distinguish between machine intelligence and machine usefulness, and for the benefit of the listeners, there are four sets of issues here. One is that machines and algorithms can be steered, you argue, towards increasing worker productivity in tasks workers are already performing; here it is a question of going forward and making better tools available to people. The second has to do with the creation of new tasks for workers – it could be professors like us getting used to having ChatGPT around and still making ourselves very useful. The third has to do with making accurate information available to decision makers, which is, I think, very crucial; this is where technology can be very helpful. And then finally, using digital technology to create new platforms, and you discuss in the book a wonderful example from Kerala in India, where fishermen use mobile phones to acquire information about where the demand is in local communities. Those are some of the examples which you may wish to highlight a little bit more. My question is: despite all of these ways of making machines useful, not just intelligent, why are tech companies not doing so? Why are they not developing tools that help humans and at the same time promote productivity? Why does Silicon Valley not want to do things differently? Is it just because they want to reduce costs? Is that it? Is it just a question of money?

 

Acemoglu

Wonderful question. And you've given an amazingly accurate and powerful summary of our argument in terms of what machine usefulness is. Machine usefulness, through the four channels that you have highlighted, is about finding new ways of making humans more productive using machines and algorithms. And the reason we had to coin a name for this, machine usefulness, is that even though it is a very powerful engine of growth and has not been an uncommon thing in the past, people have forgotten about it. All we talk about today is machine intelligence. And machine intelligence is very different from machine usefulness, and to us it has become dominant both because it is profitable for some companies, but also as an ideology; it has become a dominant vision in the tech world. There are a couple of reasons for that, but let me give you the origin story. This goes back to Alan Turing, the brilliant British mathematician, who is one of the founders of computer science and one of the founders of artificial intelligence, and who already in his early work, building on the brilliant mathematical work of the American mathematician Alonzo Church, was thinking about how to represent and think about computable machines and how to express things algorithmically, with a finite number of steps – for example, as a recipe. And building on that insight, he went on to a conceptualization of what computers are about, which led to his ideas of autonomous machine intelligence, meaning computers becoming sufficiently capable compared to humans, and autonomous, in the sense that control is no longer in human hands, but there is enough intelligence in digital technologies that they have a sufficient degree of autonomy to act with human-like capabilities, and sometimes with capabilities that go beyond humans. Now, if you adopt this view – it's a very peculiar way of thinking about the progress of machines, but it is also a powerful one. 
It's a very attractive one, because machines competing against humans goes back to mythology and gets a lot of support from Hollywood and the media, because, to be quite frank, if you're going to have a script or a movie about how a computer mouse is useful to humans, that's not going to be very exciting. But if you have one, as Stanley Kubrick discovered, about docile-looking machines suddenly taking over the human race, that's much more dramatic; good stories are those that stick in our minds, for a variety of reasons. So, autonomous machine intelligence became a good story, but it also aligned with the interests of large businesses and the tech sector, because autonomous machine intelligence is just one step removed from automation. If machines are so capable, then they can reach human parity and they can start performing human tasks. And when they do that, they can help managers control labor better, save money, expand their businesses, and that is a very profitable way of deploying technology – but sometimes at the expense of workers. So, that became in some sense a very powerful vision, fueled by a good story, fueled by a good mathematical model, and fueled by profitability throughout. Machine usefulness was in the background, and yet when it was applied, it was actually behind the most important breakthroughs. The computer mouse, hypertext, the Internet, all of the innovations that led to the smartphone and the Apple products – all of these came from a very different approach, of making computers more usable, more friendly, more at the service of humans.

 

Banik

It seems to me that what we are really talking about is more democracy, more discussion, more deliberation, more participation, more consultation, and talking about the difficult stuff. You know that this show is about development, and much of this debate on AI (and I know you're interested in the future of AI) is sometimes very US- and Europe-centric. When we think about low-income countries, or emerging countries, there are concerns about democracy, about how AI can be used for different purposes by autocratic regimes. There are all of these fears. But in terms of promoting economic development, in terms of helping people who are really suffering, people living in poverty, how do you see the future of AI being able to tackle some of the big issues? Much of your book is about inequality: you can have some people becoming rich and benefiting from technology without this translating into broad-based prosperity. So how do we use technology to address the informal sector, or even inequality? Do you think that in the future some of this technology will somehow help low-income countries address problems of poverty and underdevelopment?

 

Acemoglu

Well, first of all, let me start with your penultimate point, democracy, because I think that's very important. I want to talk about that briefly, and then I will transition from there to the developing world, which I think is overwhelmingly important. Yes, indeed, there is a natural affinity between the types of machine usefulness and pro-worker, pro-human technological change that I'm advocating, and democracy, for two reasons. They're not the same, but there's a natural affinity. First, one of the most pro-human things you can do is to actually empower humans to participate in politics, to have a voice. And it is the same types of technologies that can elevate humans in the production process that can also make them more autonomous as citizens. So, I think that is one sense in which there is an affinity between democracy and my emphasis. And second, democracy is the best bulwark against an elite-driven vision of technology. So, if we're going to take back control of technology, it has to be via some democratic means. And in fact, Simon and I claim in the book that the tech world is fundamentally anti-democratic. They may appear very globalist and very committed to Western values, very liberal in some ways, supportive of the Democratic Party in the United States; but the tech world is entirely anti-democratic when it comes to the most important decisions. Who controls technology? What should we do with workers? Can we completely sideline the rights and views of workers or the public? It's very anti-democratic when it comes to, for example, data collection. Who gave Google and Facebook and all of these other companies permission to use everybody's data without their consent? What about the rights of individuals to privacy? What about people's views about where technology should go? All of these are OK to sideline, because the view is: we are the geniuses, we're going to chart the course of the future of technology. So that's a very important point.
That's why we devote a lot of time to democracy and data collection and how centralized information is. And one aspect that makes me concerned about AI is that AI has supercharged the tendency of these companies to collect data, because, especially with the craze around autonomous machine intelligence, these companies have been forced or induced to go all the way to: "We're going to try to create wisdom, and we're going to do that by putting in more and more data, and we're going to monetize this by collecting more and more data and sending individualized targeted ads, or manipulating individuals by other means, when they visit our platforms." So, out of the automation economy that was the most powerful inequality engine of the 1980s, 1990s, and 2000s, we've now created a data-control economy that is also fueling inequality. So that's very, very important. And that's why we have to approach this problem holistically, from the economic, social, and political points of view.

Now, when we come to the developing world, I think there are two very, very important points to bear in mind. First of all, my previous books were very much targeted at understanding the issue of long-run development and inequality across countries: why countries like India and Pakistan or Turkey or Mexico are not developing enough. This book is much more about the Western nations, and Japan and South Korea, and to some extent China, that are at the technology frontier. So, I have not dwelt as much on the developing world, but there are two lessons from this book for the developing world. One of them is that technology is an imperative for the developing world. You have to use technology, and you have to use technology well; so that's an educational problem, it's an institutional problem, it's a problem of management, the right management of companies and the import of technologies. For any country that's, say, below $15,000 in GDP per capita, the most important thing is to use technology well and catch up quickly in some technological advances. That being said, there is much greater nuance here about what types of technologies, and where technology is going. And to bring this to clarity in the book, Simon and I revisit a very important set of ideas from the 1970s in the development economics literature: the literature on appropriate technology, or inappropriate technology. That body of work, which emerged in the 1970s and was very interesting, in fact going back to the 1960s, recognized that industrialized nations were at the technological frontier and that the technologies they formed and chose were going to influence what developing countries would end up using and having access to, and in this context they worried about capital-labor ratios.
They worried that the technologies being developed, say, in the US or Switzerland were very capital-intensive, whereas agriculture or small businesses or even medium-sized businesses in India or Pakistan or Nigeria could not afford that degree of capital intensity. So, that was the basis of the inappropriate-technology literature: the technologies of the West were too big in terms of plant size and too capital-intensive. What Simon and I say is that the current direction of AI is potentially the mother of all inappropriate technologies, because it is trying to save on labor, especially the mid-skilled labor that's abundant in countries like India. Think of the offshore services industry in India. It will reshape the international division of labor in a way that is not advantageous to the developing world. So, the ideas of the inappropriate-technology literature have become, in our minds, even more relevant and central. This does not deny two things that are also very important. First, for some other problems AI is going to be very useful to developing nations: for control of pandemics, climate change, perhaps translation. For Indian workers translation may not matter so much, but if you're a worker in Indonesia, the fact that large language models can be very good at high-quality translation is going to be super useful if you want to sell your labor on an online platform accessible to English-speaking firms. So, there are ways in which AI is going to be useful to these countries, but the biggest thing it can do is to go more in the direction that I have outlined as machine usefulness. The more technology in the West goes in a more pro-human direction, the more it will be useful to developing nations, to workers in India, Pakistan, Malaysia. And this is the reason why I think what happens in the United States is of utmost importance to developing nations, and it's one of those policy failures, in my opinion.
Emerging-world politicians are not paying any attention to this, and they should. It's not just who's going to get the next contract and what's going to happen with inflation in India or in Turkey; that's important. It's also who is going to shape the future of AI, and how that AI is going to influence Turkey and India and Pakistan and Latin America.

 

Banik

It seems to me that so far technology has actually been very useful to the middle and upper classes in India and China, and what I noticed from my recent trips is that the gap is increasing. So, you have people using technology for business, for making money, and then you have the service class. And that, for me, is the big challenge: the inequality gap. Congratulations again on yet another phenomenal piece of work. I take it that we have to encourage civil society and NGOs, we have to have people speak up and demand what is rightfully theirs, and, going back to the first set of issues we talked about, steer this technological advancement to facilitate what you term broad-based prosperity.

Acemoglu

Absolutely great summary, and let me add that we also need to find vehicles for the voice of developing nations to be heard here. Perhaps the United Nations should play that role. Perhaps a new agency should bring disparate developing nations together so that they can develop their own powerful voice about the future of technology. It's not just the US and China. It's not just a few entrepreneurs in Europe, plus the richest nations in the world, who should determine the future of technology.

 

Banik

Daron, it's always such a pleasure to speak with you. Thank you so much for coming on my show again.

 

Acemoglu

Thank you for giving me the opportunity to speak to you about this, Dan. Thank you.
