AIs are not just arriving, like some invading aliens from outer space. No, AI is developed by humans. So, if it turns out to be dangerous, we can just stop, or at least slow down a bit, right?
Well…
The potential of AI is too great to easily avoid an arms race. When AIs get a bit smarter, the nation with the most sophisticated systems will gain an edge in military power through military applications of AI, such as cyberwarfare.
The US and China are the major players, with the most money and the most compute. Can they avoid a suicide race?
As discussed in a previous post, Probability of AI-Caused Disaster, the probability of AI-caused human extinction varies wildly between estimates. There is, for instance, this Manifold prediction estimating a 14% chance by 2100, and this 2% estimate by Metaculus for the same year.
Regardless of the exact probability, international coordination could significantly reduce the probability of disaster, both x-risk and smaller threats. In this forecasting report from 2023, Samotsvety, an expert forecasting group, estimated how much a treaty, together with an international oversight institution to monitor treaty compliance and facilitate cooperation, would lower the risks if both the USA and China participated. They estimated it could reduce the risk by over 50%, from a 30.5% chance of “an AI catastrophe resulting in the death of the vast majority of humanity by 2200” to 14.5%.
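To be explicit about what “over 50%” means here: it refers to the relative risk reduction rather than the absolute drop in percentage points. A quick sketch using the numbers above:

```python
# Samotsvety's estimated probabilities of "an AI catastrophe resulting in the
# death of the vast majority of humanity by 2200", without and with a treaty.
p_without_treaty = 0.305
p_with_treaty = 0.145

absolute_reduction = p_without_treaty - p_with_treaty        # 0.16 (16 percentage points)
relative_reduction = absolute_reduction / p_without_treaty   # ~0.52, i.e. "over 50%"
print(f"absolute: {absolute_reduction:.2f}, relative: {relative_reduction:.0%}")
```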
But is such international cooperation realistic? When they made their estimates in 2023, Samotsvety considered it highly unlikely to occur before 2026. Let’s see what the forecasting communities think.
Short reminder:
Please refer to the latest predictions in the embedded pages rather than relying on the ‘At The Time Of Writing’ (ATTOW) predictions.
Please suggest markets/questions missing from this post, or from the forecasting platforms, in the comments.
Visit the forecasting platforms to make your own predictions!
AI Treaty
There are arguably a large number of concerning issues with future AI that need to be solved diplomatically. Perhaps humanity can at least coordinate around a few of them. Autonomous weapons, for instance?
Autonomous weapons, or “weapon system[s] that, once activated, can select and engage targets without further intervention by a human operator”, could significantly alter warfare dynamics, and allow AIs to kill without needing human approval.
Although I’m currently more concerned about scenarios even worse than autonomous weapon warfare, the low estimated probability (11% ATTOW) is discouraging. If we can’t even get a treaty on this, the probability of more restrictive treaties to deal with other dangers seems low.
To actually deal with an arms race, there probably needs to be a treaty on AI development, not just specific AI applications. There are a couple of relevant Metaculus questions for this:
Both questions allow legally binding treaties as well as non-binding agreements. The first one asks whether a treaty will be in effect in the following years; it only resolves Yes if a treaty exists on January 1 of the given year. The second one asks instead whether a treaty has been signed before January 1 of the given year, regardless of whether it has later been terminated. Since the probability estimates for the second question are considerably higher, the Metaculus forecasters apparently consider it quite likely that even if a treaty is signed, it will not last. For instance, there is currently an estimated 35% chance of a treaty being signed before 2030, but only a 22% chance that a treaty exists at the beginning of 2030. This indicates a 35% - 22% = 13% probability of a treaty being signed before 2030 but not surviving to that date.
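The arithmetic behind that 13% figure is a simple decomposition: any treaty that exists on January 1, 2030 must also have been signed before 2030, so the gap between the two estimates is the probability of a treaty that gets signed but does not survive. A minimal sketch, using the ATTOW numbers:

```python
# Metaculus estimates (ATTOW; see the embedded questions for current values).
p_signed_before_2030 = 0.35   # treaty signed before Jan 1, 2030, even if later terminated
p_exists_jan_2030 = 0.22      # a treaty exists on Jan 1, 2030

# Assuming every treaty existing on Jan 1, 2030 was also signed before then:
p_signed_but_terminated = p_signed_before_2030 - p_exists_jan_2030
print(f"P(signed before 2030 but no longer in force) ≈ {p_signed_but_terminated:.0%}")  # ~13%
```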
International Regulatory Body
Even if there is a treaty, would it help much if compliance is not properly monitored and enforced? Will there be a regulatory agency like the IAEA, but for AI? This question is on both Manifold and Metaculus, with identical resolution criteria:
This question does not require both the USA and China to be involved, however. The criteria specify that “at least 20 countries accept the statutes of an actively operating organization that regulates transformative AI projects by monitoring the capabilities of AI labs in those countries”. For instance, if the EU starts monitoring transformative projects for labs located within the EU, the criterion would be fulfilled. The EU countries seem to be lagging behind the capability frontier, though.
It’s intriguing that the predictions for this identical question are not only quite different (73% on Manifold and 50% on Metaculus); the Manifold estimate also seems to be steadily increasing while the Metaculus estimate is decreasing over time. Very curious.
There is also this question, which unfortunately doesn’t have a very specific resolution criterion. This might also be the reason that not many traders have bet on it yet.
‘Global’ probably means that both the US and China are required to accept the statutes of the organization, but that is up to the question creator’s interpretation.
Conflict
Since I couldn’t find many predictions on forecasting platforms regarding conflict using AI, I will just speculate a bit about it instead. I’ll post some questions on Manifold and revisit this topic in the future. This section won’t contain any forecasts—instead it collects some important context and topics that should probably receive more attention from forecasters in the future.
While the USA and China are in the lead of frontier AI development, they are likely to allow allies access to some of their military AI capabilities. I’ll share some well-expressed considerations about this from this post by Anton Leicht:
Especially if nations feel that an historical strategic adversary is about to gain a decisive advantage – e.g. because they will be given AGI by a great power –, a dozen uneasily pacified conflicts could reignite. This is what happened in 1914, when the German Empire, fearing the industrialization of the Russian military apparatus, willfully escalated tensions into an all-out ‘preventative war’, or 1967, when a ramp-up in Egyptian military capabilities motivated an Israeli first strike; and the incentives are set to be stronger still when talking about paradigm-shifting technology like truly advanced AI.
The above problem is of course not unique to AGI but applies to any AI with significant military advantages.
One of the reasons that a race between the USA and China seems probable is the potential applications of AI in conflict. I identify at least four categories: autonomous weapons, AI-assisted military decision making, cyberwarfare, and weapons development.
Autonomous weapons
The most obvious might be autonomous weaponry. There are military unmanned ships, killer drones, AI-controlled machine guns to counter the killer drones, and battle dogs, just to name a few examples. I can recommend The Autonomous Weapons Newsletter if you want to stay updated or learn more about such technology.
Autonomous weapons lower the threshold to kill; no human needs to make the hard decision to pull the trigger. Killer drones could target specific people or groups. Since nations using autonomous weapons don’t need to send out soldiers, warfare might both be cheaper and more popular with the nation’s population, reducing the threshold for engaging in war. It would be nice if nations in conflict simply started destroying each other’s autonomous weapon systems instead of killing soldiers and civilians, but that’s probably too much to hope for.
AI-assisted decision making
In The Strategy of Conflict, written by Thomas Schelling—a pioneer of game theory and Nobel Prize winner—I remember reading some discussion on the speed of decision making, and the escalation problems that arise when it is too fast. Even in conflict, nations engage in some coordination. While all parties want to win, they also want to avoid too heavy losses. But coordination takes time: one nation takes an action and observes how the adversary reacts; they predict each other’s next moves, make commitments, communicate, and deceive. What happens when decision making gets automated, or significantly faster through AI? There is a serious risk of escalation happening far too fast for humans to communicate and coordinate, understand what is happening, and stop it—similar to how high-frequency trading algorithms may have caused the 2010 flash crash. This video illustrates the problem well.
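To make the speed problem concrete, here is a toy calculation (my own illustration, not something from Schelling or the video): if two automated systems react to each other with a fixed decision latency, how many action-reaction cycles complete before a human operator even has time to notice and intervene?

```python
# Toy model of escalation speed (illustrative numbers only).
HUMAN_REACTION_S = 600.0  # assume ~10 minutes for operators to notice and intervene

for latency_s in (3600.0, 60.0, 1.0, 0.001):  # hours, minutes, seconds, milliseconds
    cycles = HUMAN_REACTION_S / latency_s
    print(f"decision latency {latency_s:>8} s -> ~{cycles:,.0f} action-reaction cycles before intervention")
```

With human-paced decisions (hours), operators can keep up; at machine speeds (seconds or less), thousands of moves can be made before anyone can step in.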
Cyberwarfare
AI will bring new possibilities of both offensive and defensive cyberwarfare tactics. An article titled AI in Cyber Warfare summarizes key implications:
1. AI for Offensive Cyber Attacks
AI-Powered Malware – AI enables malware to evolve and evade detection by modifying its behavior in real-time.
Autonomous Cyber Weapons – AI-driven software can scan, exploit, and infiltrate networks without human intervention.
AI-Generated Phishing Attacks – AI creates hyper-realistic phishing emails and deepfake voice scams, making social engineering more effective.
Deepfake Disinformation Campaigns – AI-generated videos spread false information to manipulate public opinion.
2. AI in Cyber Defense
Cybersecurity experts use AI to detect and neutralize threats before they cause damage:
AI-Powered Threat Intelligence – AI scans massive datasets to predict and mitigate attacks before they occur.
Automated Incident Response – AI speeds up threat detection, analysis, and containment, reducing response time.
Behavioral Analysis for Anomaly Detection – AI detects unusual activities in networks, preventing breaches.
AI-Enhanced Encryption & Authentication – AI improves password security and fraud detection, making cyber systems more resilient.
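To make the “Behavioral Analysis for Anomaly Detection” bullet above a bit more concrete, here is a minimal sketch of the underlying idea (a toy illustration with hypothetical data, not how any real security product works): flag activity that deviates strongly from a host’s own historical baseline.

```python
# Toy anomaly detector: flag a host whose hourly request count deviates by more
# than `threshold` standard deviations from its own historical baseline.
from statistics import mean, stdev

def is_anomalous(history: list[float], current: float, threshold: float = 3.0) -> bool:
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > threshold

baseline = [120, 135, 110, 128, 140, 122, 131]  # hypothetical hourly request counts
print(is_anomalous(baseline, 129))  # False: within normal variation
print(is_anomalous(baseline, 950))  # True: worth investigating
```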
Highly sophisticated systems could autonomously replicate themselves, an ability I reasoned about in my last post. Something I didn’t mention (because I hadn’t actually thought of it at the time) is that this ability could be intentionally exploited as a cyberwarfare tactic. It is the first bullet point in the attack list: AI-Powered Malware.
As cyberwarfare capabilities increase, there might be a shift of attention from conventional physical weapons systems to the digital world. While this might seem like a less violent form of warfare, it shares a critical problem with autonomous weapons systems: it reduces the threshold for attacking, since there is little risk for the attacker. And compared with a clear act of violence—such as deploying troops or dropping bombs—it seems harder to establish red lines and prevent conflict escalation in cyberwarfare. For example, when a hostile nation breaches a critical government database, the damage can be as severe as that of a traditional military assault, yet it may not be immediately recognized as an act of war.
Weapons development
As discussed under AI-assisted decision making, issues appear when things go too fast; humans don’t have enough time to coordinate and prevent escalation. But if one nation accelerates its pace, it gains a decisive advantage. There is pressure to increase speed to levels where coordination gets harder. This issue also applies to weapons development.
AI is getting better at assisting with research: there are, for instance, search tools like Perplexity, advanced chatbots that can engage in research discussions—like ChatGPT, Claude, Gemini, or DeepSeek—and tools with specific specialties such as protein folding. Now AIs are even starting to do basic research themselves, as with OpenAI’s and DeepMind’s deep research.
We should expect weapon development to speed up as well, with potentially very rapidly changing war dynamics and power shifts.
Some Additional Thoughts
The timeline and probability of a treaty or international regulatory body is highly dependent on when we might expect actually dangerous AI systems to be developed. As discussed in this post, AGI might very well be developed before 2030; my own estimate is late 2026 or 2027. I think the probability estimates in this post might be lower than appropriate if the forecasters underestimate the capabilities of AI systems in the coming years. When leaders notice how capable AIs are and how fast things are moving, they might initiate a treaty and/or a regulatory body earlier than currently expected.
Some have suggested that, in addition to a treaty and an international regulatory agency, there should be a research organization where the most advanced AI is developed collaboratively by multiple nations. The goal would be to ensure that we get most of the benefits of AI while minimizing risks. See for instance A Narrow Path. I created a question on Manifold about whether such a research initiative will be the first to develop AGI, but the prediction does not seem to have stabilized yet and has only 5 traders so far.
Lastly, let’s consider public opinion on AI. Regardless of whether the US and China, and potentially other countries, manage to establish some common rules about AI, it is plausible that the public will demand oversight and control—such as a temporary pause in advanced AI development. China might be inclined to sign a treaty, given that they are still a bit behind the USA in AI development, and the USA might feel pressured by the American people to sign one.
Regarding the question above—whether or not a protest is ‘significant’ is a bit subjective. The criteria define it as a protest that “is a for-real protest and gets covered as such with substantial visibility by at least 3 mainstream media outlets.”
I’ll write more about public opinion on AI in the future; subscribe to stay updated!
Thank you for reading!