Last year marked the 40th anniversary of the almost-apocalypse.
On Sept. 26, 1983, Soviet Lt. Col. Stanislav Petrov declined to report to his superiors a warning he suspected was false: early-warning data indicating an inbound U.S. nuclear strike. His restraint prevented a Soviet retaliatory strike and the global nuclear exchange it would have precipitated. He thus saved billions of lives.
Today, the job of Petrov’s successors is much harder, chiefly due to rapid advancements in artificial intelligence. Imagine a scenario in which Petrov receives a similar alarming warning, but one backed by hyper-realistic footage of missile launches and a slew of other audio-visual and text material portraying the details of a nuclear launch from the United States.
It is hard to imagine Petrov making the same decision. This is the world we live in today.
Recent advancements in AI are profoundly changing how we produce, distribute and consume information. AI-driven disinformation has fueled political polarization, threatened election integrity, amplified hate speech, eroded trust in science and enabled financial scams. As half the world heads to the ballot box in 2024 and deepfakes target everyone from President Biden to Taylor Swift, the problem of disinformation is more urgent than ever before.
False information produced and spread by AI, however, does not just threaten our economy and politics. It presents a fundamental threat to national security.
Although a nuclear confrontation based on fake intelligence may seem unlikely, the stakes during crises are high and timelines are short, creating situations where fake data could well tilt the balance toward nuclear war.
The evolution of nuclear systems has introduced further ambiguity into crises and shortened the window for verifying intelligence. An intercontinental ballistic missile launched from Russia could reach the U.S. within 25 minutes; a submarine-launched ballistic missile could arrive even sooner. Many modern missiles carry ambiguous payloads, making it unclear whether they are nuclear-tipped. And AI tools for verifying the authenticity of content are not yet sufficiently reliable, making this ambiguity difficult to resolve in so short a window.