A deep dive into how artificial intelligence is becoming humanity’s newest tool for preventing wars before they start
The year is 2025, and the world faces more potential flashpoints than at any time since the Cold War. Yet something remarkable is happening behind the scenes. Artificial intelligence, the same technology that powers our streaming recommendations and smartphone assistants, is quietly becoming one of humanity’s most promising tools for preventing wars before they start.
While headlines often focus on AI’s military applications and autonomous weapons, a quieter revolution is taking place in offices at the United Nations, research labs in Scandinavia, and humanitarian organizations around the globe. These groups are harnessing machine learning and data analytics not to wage war more efficiently, but to stop conflicts from erupting in the first place. The results suggest we may be entering an era where algorithms can spot the warning signs of violence months or even years before the first shot is fired.
Reading the Tea Leaves of Conflict
For decades, diplomats and intelligence analysts have tried to predict where the next conflict might break out. They studied history, monitored news reports, and relied on human intuition. But conflicts are complex systems involving dozens of variables: economic conditions, political tensions, ethnic grievances, climate pressures, and more. The human mind, brilliant as it is, struggles to process such vast amounts of information simultaneously.
Enter artificial intelligence. Modern machine learning systems can analyze massive datasets that would take human analysts years to review. They can spot subtle patterns invisible to the naked eye, detecting the combination of factors that historically preceded violence.
One of the most sophisticated examples is the Violence & Impacts Early Warning System, known as VIEWS. Developed by researchers at Uppsala University in Sweden and the Peace Research Institute Oslo in Norway, VIEWS employs AI and machine learning algorithms to analyze large datasets, including conflict history, political events, and socio-economic indicators. These algorithms are trained to recognize patterns that precede violent conflict, using both supervised and unsupervised learning methods to make predictions about the likelihood and severity of conflicts up to three years in advance.
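VIEWS’s published materials describe the approach rather than a code-level pipeline, so the following is only a minimal, hypothetical sketch of what a supervised early-warning model looks like: a gradient-boosted classifier over country-month features, implemented in Python with scikit-learn. The features, data, and decision rule are all invented for illustration.

```python
# Hypothetical early-warning sketch in the spirit of systems like VIEWS.
# All features and labels below are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# One row per country-month: e.g. recent battle deaths, protest events,
# GDP growth, years since last conflict (all invented, standardized).
X = rng.normal(size=(5000, 4))
# Label: did armed conflict follow within 36 months? (synthetic rule + noise)
y = (X[:, 0] + 0.5 * X[:, 1] - 0.3 * X[:, 2]
     + rng.normal(size=5000) > 1.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# The useful output is a ranked probability, not a verdict: analysts triage
# the highest-risk country-months for human review.
risk = model.predict_proba(X_test)[:, 1]
print("five highest risk scores:", np.sort(risk)[-5:])
```

The real system layers many such models, distinguishes types of violence, and forecasts severity as well as probability; the point here is only the basic supervised-learning pattern.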

Think about that for a moment. Three years. That’s enough time to deploy peacekeepers, facilitate diplomatic negotiations, or provide economic assistance to address grievances before they explode into violence. It’s the difference between preventing a war and cleaning up after one.
The system doesn’t just make vague predictions either. VIEWS uses predictive analytics to identify potential hotspots, where specific factors such as spikes in political unrest or economic instability suggest a higher risk of conflict. This granular approach allows international organizations to target their limited resources where they’re needed most.
Fighting Over What Lies Beneath
Some conflicts aren’t about ideology or territory but about something even more fundamental: water. In regions already strained by climate change, underground water sources can mean the difference between survival and displacement. And displaced populations often create the conditions for conflict.
This is where space technology and peacebuilding intersect. Lunasonde, a space technology company, is using its AstroGPR™ satellite remote-sensing radar to detect underground water sources that would otherwise go unnoticed. By mapping the water supply below the Earth’s surface, the team aims to expand access to water in communities facing scarcity, with a focus on remote regions. The logic is simple but powerful: if communities have access to adequate resources, they’re far less likely to fight over them.
It’s a form of conflict prevention that attacks the root cause rather than managing symptoms. And it’s only possible because AI can process the complex radar data from satellites and identify patterns that indicate water deep underground.
Picking Up on the Digital Chatter
Walk into any intelligence agency today, and you’ll find analysts monitoring social media. But there’s a problem: the sheer volume of data is overwhelming. Millions of posts, tweets, videos, and messages are published every hour. How do you separate the noise from the genuine warning signs?
AI systems are increasingly being deployed to monitor social media platforms and online sources for signs of escalating tensions. These tools can process enormous amounts of data in real time, identifying patterns and anomalies that might indicate trouble brewing. A sudden spike in inflammatory rhetoric in a particular region, combined with reports of ethnic tensions and economic hardship, might trigger an alert for human analysts to investigate further.
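To make the “spike triggers an alert” idea concrete, here is a minimal sketch of one common technique: a rolling z-score over daily counts of flagged posts. Production systems fuse many signals with far richer models; the function and numbers below are purely illustrative.

```python
# Hypothetical anomaly detector: flag days whose post count sits several
# standard deviations above the trailing window's mean.
import numpy as np

def spike_alerts(daily_counts, window=30, threshold=3.0):
    """Return indices of days that exceed `threshold` standard deviations
    above the mean of the preceding `window` days."""
    counts = np.asarray(daily_counts, dtype=float)
    alerts = []
    for t in range(window, len(counts)):
        history = counts[t - window:t]
        mu, sigma = history.mean(), history.std()
        if sigma > 0 and (counts[t] - mu) / sigma > threshold:
            alerts.append(t)  # escalate to a human analyst, not to any action
    return alerts

# Synthetic example: a quiet baseline, then a burst of inflammatory posts.
series = [20] * 60 + [24, 95, 110]
print(spike_alerts(series))  # -> [61, 62]
```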
The key is that AI doesn’t replace human judgment. It augments it. The algorithm flags potential concerns, but experienced diplomats and analysts make the final call about whether intervention is needed and what form it should take.
When the Time Is Right
Here’s a challenge that has long puzzled peacemakers: how do you know when conflicting parties are actually ready to negotiate? Push for talks too early, and you waste political capital on doomed efforts. Wait too long, and more people die needlessly.
Academics have long used frameworks like Zartman’s Ripeness Theory to understand when conflicts are “ripe” for resolution, typically when both sides realize they cannot win militarily and are suffering from the costs of continued fighting. But applying this theory has always been more art than science, relying on subjective judgment.
Project Didi uses AI, machine learning, natural language processing, and large language models to operationalize Zartman’s Ripeness Theory, transforming this academic theory into a quantitative, data-driven tool. The system offers a real-time method for identifying moments when conflicting parties are ready for negotiation. In its pilot phase, Project Didi analyzed the Northern Ireland conflict. Now it’s being applied to the Israeli-Palestinian situation, processing vast amounts of data from news media, social media, and other sources to spot those critical windows of opportunity.
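Project Didi has not published its pipeline, so any code can only gesture at the flavor of the idea. The toy sketch below computes a crude “ripeness” signal by counting stalemate-flavored language in news text; a real system would rely on trained language models rather than a hand-written lexicon.

```python
# Toy "ripeness" signal: what fraction of recent articles use
# mutually-hurting-stalemate language? (Illustrative lexicon only.)
STALEMATE_TERMS = {"stalemate", "deadlock", "exhausted", "unwinnable",
                   "war-weary", "ceasefire", "talks"}

def ripeness_score(articles):
    """Fraction of articles containing stalemate-flavored language."""
    if not articles:
        return 0.0
    hits = sum(any(term in a.lower() for term in STALEMATE_TERMS)
               for a in articles)
    return hits / len(articles)

week1 = ["Government vows total victory", "Offensive gains ground"]
week2 = ["Both sides admit the war is unwinnable",
         "Commanders describe a deadlock", "Calls grow for talks"]
print(ripeness_score(week1), ripeness_score(week2))  # 0.0 vs 1.0
```

A rising score would be, at best, one input among many; the theory’s real test is whether such signals line up with the moments when talks historically succeeded.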
The implications are profound. Mediators could use such tools to time their interventions precisely, engaging parties when they’re most likely to be receptive rather than wasting diplomatic efforts when conditions aren’t favorable.
Deliberative AI: Finding Common Ground
One of AI’s most promising applications in peacebuilding is something called “Deliberative AI”: systems designed to facilitate inclusive dialogue, synthesize diverse perspectives, and help identify common ground among conflicting parties. The aim is to deepen community engagement in peace processes by amplifying marginalized voices and structuring dialogues that everyone can join.
Imagine a peace process where AI helps ensure that minority voices aren’t drowned out by dominant groups. Or where the system can process thousands of perspectives and identify areas of unexpected agreement that human facilitators might miss. This isn’t about replacing human mediators but about giving them better tools to do their incredibly difficult work.
At a discussion hosted by Harvard Kennedy School’s Belfer Center, experts explored how AI could assist mediators. Dr. Jeffrey Seul noted that in his work as a mediator, he has employed large language models (LLMs) to assist with conflict analysis in contexts where parties hold divergent worldviews. LLMs can identify blind spots, encourage perspective-taking, and support the generation of options to help resolve differences.
Some research suggests this approach works. One study found that AI-generated proposals were clearer and less polarizing than those written by human mediators, perhaps because the models sidestep some of the emotional baggage and unconscious bias that even well-trained humans bring to the table.
When Crisis Strikes
Prevention is ideal, but some conflicts still erupt despite our best efforts. When they do, AI is proving valuable in a different way: making humanitarian response faster and more effective.
The Danish Refugee Council’s Data Entry and Exploration Platform, known as DEEP, is an online platform that provides open data and analytical tools to humanitarian organizations and other groups supplying aid during crises. The platform uses generative AI and natural language processing to automate time-consuming, costly tasks such as summarizing and tagging incoming information.
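DEEP’s internal models aren’t detailed here, but the tagging step it automates can be approximated with an off-the-shelf zero-shot classifier. The sketch below uses Hugging Face’s transformers library; the sector taxonomy, report text, and confidence threshold are all invented for illustration.

```python
# Hypothetical report-tagging sketch using a generic zero-shot classifier.
# DEEP's actual models and taxonomy differ.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

SECTORS = ["food security", "shelter", "health",
           "water and sanitation", "protection"]

report = ("Families arriving at the eastern camp report three days without "
          "clean drinking water and rising cases of diarrhea among children.")

result = classifier(report, candidate_labels=SECTORS, multi_label=True)
for label, score in zip(result["labels"], result["scores"]):
    if score > 0.5:  # route confident tags to the relevant coordination cluster
        print(f"{label}: {score:.2f}")
```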
In a crisis, speed matters. Every hour of delay in getting aid to people means more suffering. Traditional methods require humans to manually process reports, categorize information, and identify priorities, which takes days or weeks. AI systems can do much of this work in hours, freeing humanitarian workers to focus on actual aid delivery rather than paperwork.
The system also helps organizations avoid duplication of efforts and identify gaps in coverage. If five groups are all planning to deliver food to one refugee camp while another camp gets nothing, the AI can spot this pattern and alert coordinators to adjust their plans.
Cleaning Up the Deadly Aftermath
Even after peace agreements are signed, conflicts leave dangerous legacies. Landmines and unexploded ordnance kill and maim civilians for decades. Traditional demining is slow and dangerous, with human deminers risking their lives to clear land meter by meter.
Aerobotics7’s drone-based system, EAGLE A7, detects, tracks, and helps neutralize hidden threats after a conflict. The company combines AI, sensor fusion, and autonomous systems to offer an alternative to manual clearance, and is currently working to apply its technology to help make contaminated areas of Ukraine livable again.
The drones can survey large areas quickly, using AI to analyze sensor data and identify likely locations of buried explosives. This doesn’t eliminate the need for human deminers, but it makes their work safer and more efficient by telling them exactly where to look.
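As a rough illustration of that last step, the sketch below turns per-cell detector scores from a drone survey into a worst-first clearance list. The scores are synthetic and stand in for whatever proprietary models a system like EAGLE A7 actually runs.

```python
# Hypothetical triage step: rank survey grid cells by the model's estimated
# probability of buried ordnance so deminers know where to look first.
import numpy as np

rng = np.random.default_rng(1)
scores = rng.random((20, 20))  # synthetic per-cell probabilities

THRESHOLD = 0.9
suspect = np.argwhere(scores > THRESHOLD)

# Worst-first ordering: the highest-risk ground gets deliberate attention.
ranked = sorted(map(tuple, suspect), key=lambda cell: -scores[cell])
for row, col in ranked[:5]:
    print(f"cell ({row}, {col}): p = {scores[row, col]:.2f}")
```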
The Technology Lag in Diplomacy
Here’s an uncomfortable reality: while military forces around the world are racing to adopt AI for warfare, diplomacy is lagging behind. Martin Wählisch, former lead of the Innovation Cell at the UN Department of Political and Peacebuilding Affairs, noted that government technology spending is projected to double by 2034, with much of it going to security and defense. AI has already demonstrated versatile applications in warfare, including natural language processing, deep learning for aircraft recognition, AI-assisted geospatial analysis, and social media mining. Diplomacy has no comparable toolkit.
This technology gap is concerning. If militaries have sophisticated AI tools while diplomats are working with outdated systems, it tips the scales toward military solutions. The international community needs to invest as heavily in AI for peace as it does in AI for war.
Some cutting-edge tools are emerging. Technologies such as Extended Reality are opening new possibilities for immersive peacebuilding. Digital twin technology, or virtual replicas of real-world environments updated with real-time data, allows us to better simulate complex negotiations and their potential outcomes. Meanwhile, digital deliberation tools, including surveys and focus groups, create avenues for more inclusive and structured public engagement.
Imagine negotiators being able to virtually walk through a disputed region together, seeing the same terrain and communities, before making decisions about where a border should run. Or running simulations of different peace agreement terms to see how they might play out over time. These tools could help negotiators understand the consequences of their choices before committing to them.
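A toy version of that second idea might look like the Monte Carlo sketch below, which compares two hypothetical sets of agreement terms by simulating an invented “trust” variable over time. The dynamics are entirely made up; the point is only the pattern of exploring terms in simulation before committing to them.

```python
# Purely illustrative Monte Carlo: how often does an agreement survive a
# decade under weak versus strong terms? The dynamics are invented.
import random

def survival_rate(term_strength, years=10, trials=1000):
    """Fraction of simulated runs in which trust never collapses to zero."""
    survived = 0
    for _ in range(trials):
        trust = 0.5
        for _ in range(years):
            trust += term_strength * 0.05 - random.uniform(0, 0.2)
            if trust <= 0:
                break
        else:
            survived += 1
    return survived / trials

random.seed(42)
print("weak terms:  ", survival_rate(term_strength=0.5))
print("strong terms:", survival_rate(term_strength=1.5))
```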
The Dark Side of the Algorithm
It would be naive to present AI as an unalloyed good for peace. The same technologies that can prevent conflicts can also be weaponized. AI-powered disinformation campaigns can inflame tensions. Autonomous weapons systems raise profound ethical questions. And there are more subtle concerns too.
Dr. Seul acknowledged the limitations of AI, particularly when addressing conflicts rooted in sacred values or fundamentally different belief systems. He warned that this raises concerns about oversimplified moral reasoning and lost cultural nuance. Not every conflict is a problem to be optimized. Some involve fundamental questions of identity, justice, and morality that resist algorithmic solutions.
There’s also the risk of what scholars call “AI solutionism.” Seul cautioned against the tendency to believe that AI tools, simply because they exist, can resolve deeply human conflicts. AI must be positioned as a support to human judgment, not a replacement for the cultural nuance, historical context, and live human empathy that a peace process demands.
Bias is another serious concern. AI systems learn from historical data, and if that data reflects past prejudices, the AI will perpetuate them. A conflict prediction system trained on biased data might flag certain ethnic groups or regions as high risk simply because they’ve been discriminated against in the past, creating a self-fulfilling prophecy.
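The standard remedy is to audit such systems before trusting them. Here is a minimal sketch of one such audit, comparing false-positive rates across two regions; the data are synthetic, and real fairness audits examine many more metrics and groups.

```python
# Minimal bias audit: does the model flag one region far more often than
# another when the actual outcomes are identical? (Synthetic data.)
import numpy as np

def false_positive_rate(flags, actual):
    flags, actual = np.asarray(flags), np.asarray(actual)
    negatives = actual == 0
    return float((flags[negatives] == 1).mean()) if negatives.any() else 0.0

actual_a, flags_a = [0, 0, 0, 0, 1], [0, 0, 0, 0, 1]
actual_b, flags_b = [0, 0, 0, 0, 1], [1, 1, 0, 0, 1]  # over-flagged region

fpr_a = false_positive_rate(flags_a, actual_a)  # 0.00
fpr_b = false_positive_rate(flags_b, actual_b)  # 0.50
if fpr_b > 2 * max(fpr_a, 0.01):
    print("Audit flag: region B is over-predicted relative to region A.")
```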
Martin Wählisch emphasized the increasing dominance of private companies in the AI ecosystem. Large technology firms behind AI models may inadvertently embed corporate interests into conflict mediation tools, raising questions about neutrality, access, and long-term governance. If a handful of tech companies control the AI systems used for peacebuilding, what happens when their interests conflict with peace? Who ensures these systems are transparent and accountable?
Building AI Literacy for Peace
If AI is going to play a major role in conflict prevention and resolution, we need people who understand both peace and technology. Currently, most diplomats and mediators have limited understanding of AI, while most AI developers know little about conflict resolution.
The experts at the Belfer Center discussion stressed the need for a dual approach: leveraging AI’s capabilities to enhance human decision-making while remaining vigilant about bias, disinformation risks, corporate influence, and oversimplification. Building AI literacy among mediators, policymakers, and the broader public will be crucial.
This means training the next generation of peacebuilders to work alongside AI systems, understanding both their potential and their limitations. It means involving peace practitioners in the development of these tools from the beginning, not just having tech companies build systems and then try to sell them to the UN.
Crucially, it means collaboration that is genuinely global. Both the Kluz Prize for PeaceTech and events at NYU have emphasized this need, stressing that voices from the Global South must be central to developing and implementing AI solutions for peace.
Too often, technology is developed in wealthy Western countries and then exported to conflict zones in the Global South. This creates tools that may not work well in different cultural contexts and reinforces existing power imbalances. True AI for peace requires genuine partnership, with communities affected by conflict helping to design the systems meant to protect them.
The Chess Analogy
When computers first began beating humans at chess in the 1990s, many feared it would kill the game. Why would anyone bother playing or studying chess if machines were simply better?
Instead, something unexpected happened. Top chess players began using AI as a training tool, and the quality of human chess actually improved. AI’s entry into the game elevated, rather than displaced, human talent: humans and machines working together produced better results than either alone.
The same principle could apply to peacebuilding. AI systems can process vast amounts of data, spot patterns, and generate options. But humans bring judgment, cultural understanding, empathy, and moral reasoning. The goal isn’t to replace human mediators with algorithms but to give those mediators better tools.
A More Peaceful Algorithm
The trajectory of human civilization has been, broadly speaking, toward less violence. Steven Pinker and other researchers have documented a long-term decline in war deaths per capita, even if the absolute numbers remain horrifying. Could AI accelerate this trend?
The evidence so far gives grounds for cautious optimism. Early warning systems are getting better at spotting conflicts before they erupt. Humanitarian response is becoming more efficient. New tools are helping mediators find common ground in seemingly intractable disputes. Resources like water are being located before communities fight over their absence.
But technology alone won’t create peace. AI is a tool, and like any tool, its impact depends on how we use it. The same machine learning that can predict conflicts can power autonomous weapons. The same data analysis that helps target aid can enable targeted violence.
The choice is ours. We can pour resources into developing AI for war, or we can invest in AI for peace. We can let a handful of tech companies control these systems, or we can ensure they’re governed transparently with input from affected communities. We can use AI to entrench existing biases and power structures, or we can design it to amplify marginalized voices and challenge our blind spots.
The United Nations, humanitarian organizations, research institutions, and tech companies are at a crossroads. The tools to prevent conflicts before they start are becoming more sophisticated and accessible. The question is whether we’ll muster the political will to use them.
History may look back on this era and note that humanity finally developed the capability to see wars coming with enough clarity and time to stop them. Whether we actually did so will depend on choices made in the next few years by policymakers, technologists, and citizens around the world.
The algorithm of peace is being written right now. We all have a role in determining what it says.