Artificial intelligence is transforming healthcare, education, and governance. Dan Banik and Francesco Marcelloni explore the risks and benefits, and why human judgment must remain central in the AI era.
Artificial intelligence is rapidly transforming how societies function — from healthcare and education to governance, public debate, and the future of work. But as AI systems become more powerful and more deeply embedded in everyday life, they also raise important questions about misinformation, democratic accountability, and the role of human judgment.
In this episode, Dan Banik speaks with Francesco Marcelloni, Professor of Data Mining and Machine Learning at the University of Pisa and Academic Director of the Knowledge Hub on AI at the Circle U. European University Alliance. They explore how AI actually works, why the debate around the technology has become so polarized, and what it means for decision-making in governments, hospitals, universities, and businesses.
The conversation examines both the risks and the opportunities of artificial intelligence, including its potential to improve medical diagnosis, support education, and help policymakers analyze vast amounts of data. Dan and Francesco also highlight why preserving human oversight, critical thinking, and democratic accountability will be crucial in the AI era.
[Dan Banik]
Hi, Francesco. Good to see you. Welcome to the show.
[Francesco Marcelloni]
Thank you, Dan. Good morning.
[Dan Banik]
There are lots of people who are really concerned about AI. I want to refer to what Yuval Noah Harari recently said at Davos: that AI is not just another tool, but actually an agent. He used the example of a knife. A knife can be used by humans to cut salad, but also to murder someone. Either way, we humans are in charge of the knife. AI, by contrast, can decide for itself what to do with the knife. So we’ve lost control. Are you worried?
[Francesco Marcelloni]
Thank you for this question. It’s a very hot topic because a lot of people are discussing the role of AI and its impact on society and humanity in general. I strongly believe that AI is a technology. And when we talk about technology, of course, there are benefits, but there are also drawbacks.
The real issue today is that the drawbacks of AI can have a serious impact on society and on humanity. But I think we have to discuss AI from this double perspective. On the one hand, AI can help. We have many examples of how AI can be beneficial. You can think about medicine, and many other application domains where AI is already producing real benefits.
But on the other hand, we should be careful, because AI is very pervasive, and interacting with it can feel almost like talking to a human. This is one of the big challenges today. When we interact with some of these tools, it can feel as though we are really conversing with another person. And that “person” is not fully under our control. So yes, AI has an impact on society and humanity, but we should not consider AI a demon. At least for now, that is not the situation.
[Dan Banik]
My frustration is that some of the debate is very one-sided. You either hear about all the benefits, which I like to highlight, or, at least in the Western world, mostly about the dangers. One concern is democracy: the idea that AI can lie, manipulate people, and fuel fake news, misinformation, and disinformation.
But what I feel is that in our part of the world, while we focus on the dangers, we don’t talk nearly enough about the benefits, which is what much of the Global South is talking about. So the AI discussion, in my view, is extremely polarized.
[Francesco Marcelloni]
That is true. But we should also remember that AI can help reduce some of the very drawbacks it creates. Just to give one example, if we think about deepfakes or fake news, many of them are generated using AI. But we also have AI-based tools to detect them.
So again, AI has this double role today. On the one side, it can help produce fake news. On the other side, it can help detect it and counter its effects. So yes, AI is dangerous in some fields, but we also have to recognize that it brings many benefits and can even be used to counteract some of its own negative effects. This double perspective is very important.
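To make the detection side concrete, here is a minimal sketch of the kind of text classifier that underlies many fake-news detectors. The handful of labeled headlines is invented purely for illustration; real systems are trained on large, curated corpora with far richer models.

```python
# Toy sketch of AI-based fake-news detection: a bag-of-words classifier.
# The tiny labeled dataset below is invented purely for illustration;
# real detectors are trained on large, carefully curated corpora.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

headlines = [
    "Scientists publish peer-reviewed study on vaccine efficacy",
    "Government releases official budget figures for next year",
    "Miracle cure doctors don't want you to know about",
    "Secret lab admits moon landing was staged, insiders say",
]
labels = ["real", "real", "fake", "fake"]

# Vectorize the text and fit a simple linear classifier.
detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
detector.fit(headlines, labels)

print(detector.predict(["Shocking miracle cure they are hiding from you"]))
```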
[Dan Banik]
So what you’re highlighting is the importance of human control over technology. That’s what some people have been arguing. I’ve had Daron Acemoglu on the show talking about this: technology is fine, but we humans have to control the evolution. We have to shape the direction it takes.
And I suppose one concern is that a few Silicon Valley elites are deciding that direction. That is one danger. Another criticism I hear, Francesco, is that much of AI development is shaped by the interests of the Western world. So it may be our needs that AI is trying to address, not necessarily the needs of the vast majority of people living in non-Western settings.
[Francesco Marcelloni]
Yes, that is true. But when we talk about the West, I think Europe is also suffering in this regard, because most of the technologies we use, especially large language models, come from outside Europe. Of course, we have some developed in Europe, but most of the models used by the public come from elsewhere.
This creates strategic problems for Europe as well. At the moment, I do not really see a clear solution. There are discussions about strategy in Europe, but we still do not have a clear shared vision.
I agree with you that AI development is still largely driven by the West, even though we also see important developments in countries like China and India. But when most people talk about AI, they mainly mean large language models, because in recent years this is how the public has started to engage with AI.
In reality, AI is much broader than that. We have applications of AI in many different fields. And many analysts believe that the economic impact of what we might call classical AI is actually larger than that of generative AI and large language models.
In any case, I strongly believe that we still have not fully realized the economic possibilities of AI. We have many examples, but not many companies are yet generating major revenues from AI. So from this perspective, the future of AI still has to be explored. We need to better understand how much AI can affect the economy, society, and humanity.
[Dan Banik]
If we think about Europe, though, Francesco, we are a bit stuck between the United States, China, and other countries. Some would say we haven’t done enough—that it’s about time Europe got its act together and started putting in resources.
What should we be doing more of? Is it new legislation to make sure the manipulative potential of AI is kept under control? Should we be investing more in public awareness so people can detect fake news? Where should the effort go if we want to combat the negative aspects of AI?
[Francesco Marcelloni]
Let me introduce two concepts very briefly. When we talk about the risks of AI, we should distinguish between two categories. First, there are risks inherent in AI itself. These come from the data we use to train machine learning models: if the training data are biased or incomplete, the system's behavior will reflect those flaws. Machine learning is the basis of much of AI today. In this case, legislation is not the main solution. What we need is research in order to reduce these problems.
On the other side, we have risks related to the misuse of AI. In this case, legislation becomes important. In Europe, we have the AI Act, which provides a broad regulatory framework. I believe it is moving in the right direction, because there should be limits on certain uses of AI and better mechanisms to control how AI applications are deployed.
But AI is a living technology. For example, in the AI Act, chatbots are considered low-risk applications. But today, many people interact with tools such as ChatGPT, and these tools can influence the way people think. So we should ask whether chatbots should really still be considered low risk.
When the Act was drafted, we were still at the beginning of generative AI. Now generative AI is everywhere. So it may be necessary to revisit some of these regulations in the future. Ultimately, we need to see how these regulations work in practice before we can fully understand their effectiveness.
[Dan Banik]
I think this is a really crucial point. It relates to what some people see as the central danger of AI in relation to humanity: the possibility that AI might eventually think on its own.
We humans pride ourselves on our ability to think critically and creatively. But with chatbots giving advice and nudging people in certain directions, there is a fear of being influenced. Are we there yet? Is AI able to think independently, or are large language models simply synthesizing what humans have written over time?
[Francesco Marcelloni]
At the moment, AI cannot really reason as humans do. We can interact with AI because these foundation models are trained on vast amounts of data. AI responds from a statistical perspective, by combining words and patterns that are likely within a given context.
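As a deliberately tiny illustration of that “statistical perspective”, the sketch below builds a bigram model: it counts which word follows which in a toy corpus, then samples the next word from those counts. Real large language models use deep neural networks over enormous corpora, but generation is likewise sampling from learned probabilities.

```python
# Deliberately tiny sketch of "statistical" text generation: a bigram
# model counts word-to-word transitions, then samples the next word from
# the resulting distribution. Real LLMs use deep networks and subword
# tokens, but generation is likewise sampling from learned probabilities.
import random
from collections import Counter, defaultdict

corpus = "ai can help people and ai can also mislead people".split()

# Count how often each word follows each other word.
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def generate(word, length=6):
    out = [word]
    for _ in range(length):
        followers = transitions[out[-1]]
        if not followers:
            break
        words, counts = zip(*followers.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(generate("ai"))
```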
But we do not know exactly what we will have in the future. In any case, it is fundamental that the final decision remains with humans. I very much like a new paradigm discussed in the literature: the co-evolution of humans and AI.
Humans benefit from interacting with AI because AI can process very large quantities of data, something impossible for humans alone. On the other hand, AI needs humans for training and guidance. So this idea of co-evolution is, in my view, the right way to look at the future. Through this process, we can produce development and gain benefits while maintaining human control over AI.
[Dan Banik]
Co-development and co-evolution are crucial. But I suppose what many people wonder is whether AI could become like a runaway train that eventually takes over. Right now, we are co-developing. But after a certain threshold is reached, AI could become so much more capable and faster (in some respects it already is) that humans would no longer be necessary.
[Francesco Marcelloni]
If we think about the capacity to process large amounts of data, that is already the situation today. But when it comes to human intuition and the human ability to solve problems (especially new problems) I do not think that, at least for now, we can imagine an artificial intelligence that can truly operate in that way.
There are some human characteristics that are probably not easy to implement in AI, even in the future. But AI can still interact with us and, in some ways, emulate aspects of our behavior or our thinking. Because of that, we should be very careful about the psychological impact AI may have on people. That is one area where caution is especially important.
[Dan Banik]
We’ve spoken quite a lot about the dangers. It’s about time to flip the script and talk about the benefits. One of the things you and I have written about recently is democracy and governance: the ability of artificial intelligence to go through enormous quantities of data, millions of documents, and synthesize information in ways that humans simply cannot.
That could streamline and improve governance by giving policymakers insights at their fingertips that would otherwise be very difficult to obtain. So Francesco, help my listeners understand: how is it possible for AI to do that? What are the mechanisms? Is it machine learning? What technologies are currently available to support decision-making and knowledge synthesis?
[Francesco Marcelloni]
The key technology is machine learning. Machine learning can process large amounts of data and extract knowledge from it. Research in machine learning actually began about 50 years ago, so it is not entirely new. But today we have much more powerful computers and the ability to store huge quantities of data—something that was missing decades ago.
Now we can process vast amounts of data very quickly in order to extract knowledge. For example, that knowledge might concern what people think about a particular strategy that a municipality or a government is implementing. Using machine learning, we can analyze public feedback in near real time.
Some years ago, we conducted analyses of public perceptions of vaccination using social media data, especially tweets. Interestingly, we started this analysis even before the pandemic. It allowed us to understand the general attitudes people had toward vaccination. This is just one example of how machine learning can extract knowledge and help policymakers understand public opinion.
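A minimal sketch of this kind of opinion mining, assuming a stream of short posts (the examples below are invented): VADER, a rule-based sentiment scorer bundled with NLTK and designed for social media text, assigns each post a polarity score.

```python
# Sketch of near-real-time opinion mining over short posts. VADER is a
# rule-based sentiment scorer suited to social media text.
# (Requires: pip install nltk, plus the one-time lexicon download below.)
import nltk
nltk.download("vader_lexicon")  # one-time download of the scoring lexicon
from nltk.sentiment import SentimentIntensityAnalyzer

# Invented example posts standing in for a real social media stream.
posts = [
    "Got my vaccine today, quick and painless. Grateful!",
    "I don't trust these vaccines at all, nobody tells us the truth.",
    "Appointment was easy to book, staff were great.",
]

analyzer = SentimentIntensityAnalyzer()
for post in posts:
    score = analyzer.polarity_scores(post)["compound"]  # -1 (neg) to +1 (pos)
    stance = "positive" if score > 0.05 else "negative" if score < -0.05 else "neutral"
    print(f"{stance:>8}  {post}")
```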
However, governments should not rely only on this kind of feedback. Politicians must also be proactive. Sometimes they have to make decisions that are not immediately popular but are necessary for long-term societal benefit.
[Dan Banik]
This reminds me of course evaluations. I’ve been teaching one course for 17 years, and the evaluations vary enormously from year to year. One cohort may hate a particular article on the syllabus, while another cohort loves it. As a professor, you have to decide whether to keep it or remove it despite those fluctuations.
Staying on the positive side of AI, you mentioned health earlier. That is an area where I personally see enormous potential. Given my own health challenges in recent years, I’ve become very interested in applying AI in my everyday life.
For example, I use an app called Lifesum to track what I eat. I take a picture of my food, and AI estimates the calories. Of course, I still need to correct it sometimes. It might say I’m eating too much cheese, or it may misidentify the food entirely. But it is extremely useful.
I also used ChatGPT yesterday to help interpret a medical report. These reports are often written in highly technical language. AI helped break the information down into simpler terms. I don’t rely on it for medical judgment, of course, but it was helpful.
In the Global South, there are also applications being developed to detect snakebites, diagnose anemia, and provide medical advice to people living far from health centers. And education is another area where AI offers huge promise. Through apps, students can access high-quality learning resources and receive support even if they live far from major educational institutions.
[Francesco Marcelloni]
Health is indeed one of the domains where AI has progressed significantly. In some applications, AI systems perform extremely well, sometimes even better than physicians in very specific tasks. But we have to clarify what that means.
In many research groups, including mine, we are working on explainable AI. This is important because it increases trust in these systems, both for physicians and for patients. Doctors need to understand why an AI system is suggesting a particular diagnosis.
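One common model-agnostic explainability technique is permutation importance: shuffle one input feature at a time and measure how much the model's accuracy degrades. The sketch below uses synthetic stand-ins for clinical features; it illustrates the general technique, not any specific medical system.

```python
# Model-agnostic explainability via permutation importance: shuffling an
# important feature should hurt accuracy; shuffling an irrelevant one
# should not. The "clinical" features here are entirely synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 3))  # synthetic columns: age, marker_a, marker_b
y = (X[:, 1] + 0.3 * rng.normal(size=n) > 0).astype(int)  # marker_a drives y

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, imp in zip(["age", "marker_a", "marker_b"], result.importances_mean):
    print(f"{name:>9}: {imp:.3f}")  # marker_a should dominate
```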
I recently participated in a roundtable discussion about a book written by a physician that explored the relationship between patients and doctors in the AI era. The author noted that many patients now arrive at the doctor’s office already convinced they know their diagnosis, because they have consulted Google.
This shows how people interact with digital tools. But we must remember that expert knowledge is still essential. Doctors have experience and understanding that cannot easily be replaced by machines, especially when it comes to human relationships and empathy.
[Dan Banik]
I was also thinking about how effective AI can be when combined with humans. Radiology is a good example. There has been quite a lot of research on AI systems that analyze X-rays or MRI scans.
The evidence seems to suggest that AI alone can be effective, and human experts alone can also be effective. But when they work together, the results are often even better. Is that correct?
[Francesco Marcelloni]
Yes, there are studies showing that AI systems can sometimes outperform expert cardiologists or radiologists in detecting certain patterns. But I do not think the goal should be to replace doctors.
Rather, these tools should help physicians work faster and more efficiently. For instance, AI can automatically highlight suspicious areas in CT scans. This allows radiologists to focus their attention on specific regions, which improves both speed and accuracy.
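A minimal sketch of that highlighting step, assuming the model has already produced a per-pixel probability map for a scan (the heatmap below is synthetic): threshold the map and report the connected regions for the radiologist to inspect.

```python
# Sketch of "highlighting suspicious areas": given a model's per-pixel
# probability map for a scan, threshold it and report connected regions
# for the radiologist to inspect. The heatmap below is synthetic.
import numpy as np
from scipy import ndimage

heatmap = np.zeros((64, 64))
heatmap[10:14, 20:25] = 0.9   # synthetic "suspicious" region
heatmap[40:42, 50:53] = 0.7   # a second, fainter one

mask = heatmap > 0.5                  # keep high-probability pixels
regions, count = ndimage.label(mask)  # group them into connected blobs
print(f"{count} suspicious regions found")
for i, box in enumerate(ndimage.find_objects(regions), 1):
    rows, cols = box
    print(f"  region {i}: rows {rows.start}-{rows.stop}, cols {cols.start}-{cols.stop}")
```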
From a regulatory perspective, the final responsibility still lies with the doctor, not with the AI system. That is extremely important.
[Dan Banik]
That’s a very useful example. Radiologists often face enormous time pressure. They have to review large numbers of scans quickly, and that pressure can lead to mistakes. AI could help reduce those errors by acting as an additional pair of eyes.
Another situation is when diagnoses fall into a gray area. In such cases, AI may simply present the data without making a definitive judgment. The physician still has to make the final decision about treatment. So at least for now, humans remain in control.
Let’s move to a final topic that interests us both: education. Universities and schools are currently facing major disruption because of generative AI. Many institutions have had to rethink how they evaluate students. Traditional take-home exams have become problematic because students can easily generate essays with AI tools.
Some universities are now emphasizing in-class assessments or oral examinations instead. There is a lot of anxiety about this. What are your thoughts on the dangers and opportunities of AI in education?
[Francesco Marcelloni]
I believe we are witnessing a true revolution in education. As you said, it is difficult to determine whether a report or assignment was produced by the student or by AI.
But we cannot simply forbid students from using these tools. That would be unrealistic. Instead, we must teach them how to use AI responsibly. They should treat AI as a support tool—like having an expert friend who can help them.
At the same time, they must preserve a critical mindset. In the technology sector, many companies are already using generative AI to produce software automatically. But they still need expert programmers to evaluate whether the generated code is actually correct.
This has important implications for education. Students must develop the ability to evaluate and judge AI outputs. They need critical thinking skills to determine whether an answer is valid or flawed.
At the same time, generative AI also offers real advantages. It can support students with disabilities and help personalize learning. In this way, it can strengthen student-centered education by adapting the learning process to individual needs.
[Dan Banik]
When I was a PhD student, constructing a bibliography required enormous effort. We relied on library searches and primitive websites to track down relevant articles. Then Google Scholar came along and made it easier to find literature we might otherwise have missed.
But the difference today is that students can ask AI to generate a direct answer. The risk is that they skip the process of reading and thinking. They may rely on shortcuts instead of doing the intellectual work.
I’m not entirely sure how we should advise students in practice. Perhaps the key is to encourage them to use AI ethically—for example, by asking it to refine their arguments rather than generate them from scratch.
[Francesco Marcelloni]
I think we must interact with students more directly and assess whether they truly understand the concepts. It is easy now for them to generate reports or software using AI tools. But if you ask deeper questions—about the underlying algorithms or methods—they may not know the answers.
For instance, many students use machine learning libraries without understanding the algorithms behind them. If they do not understand the principles, they cannot properly evaluate whether the results are valid.
So educators must focus on ensuring comprehension. The goal is not simply to produce a report, but to develop critical thinking and analytical ability.
We have experienced technological revolutions before. The calculator, for example, replaced many manual calculations. But AI has the potential to replace far more cognitive functions. That is why we must be careful to preserve essential intellectual capacities.
[Dan Banik]
Looking ahead, Francesco, where do you think AI development will focus in the next few years? What kinds of problems are researchers trying to solve? And should young people entering the workforce worry about certain professions disappearing?
[Francesco Marcelloni]
From a research perspective, there is strong interest in developing artificial general intelligence—systems capable of reasoning more like humans. Many researchers are investigating how to give AI stronger reasoning capabilities and better explanations for its outputs.
In terms of applications, companies are experimenting with large language models across many domains—not just for chatbots, but also for management support, decision analysis, and data interpretation. However, these applications still need to demonstrate clear economic value.
Regarding employment, I think some jobs will disappear. But interestingly, the jobs most at risk may not be low-skill manual jobs, but rather medium- and high-skill analytical jobs. AI can process enormous amounts of data and produce insights very quickly.
At the same time, history shows that technological revolutions often eliminate some jobs while creating new ones. We must hope that this pattern continues.
[Dan Banik]
What about professors? Should we be worried?
[Francesco Marcelloni]
Many people ask me that question. I usually respond that it is possible we might lose our jobs, but hopefully not our salaries.
But seriously, AI will certainly transform many professions. Robotics may also affect manual labor in the future. The overall picture is still unclear. But if history is any guide, new opportunities will likely emerge alongside these changes.
[Dan Banik]
Well, on that optimistic note, Francesco, thank you very much for joining me today. It’s always a pleasure to speak with you.
[Francesco Marcelloni]
Thank you. It was really a pleasure.