Miles’s Substack
Standards, Incentives, and Evidence: The Frontier AI Governance Triad
One way of thinking about the ingredients we need to make frontier AI systems safe and secure.
Jun 19 • Miles Brundage
Iran and the opportunities and limits of an “IAEA for AI”
The situation in Iran shows why an IAEA for AI would not be sufficient to prevent AI-related international crises.
Jun 18 • Miles Brundage
AI is a Liquid, Not a Solid
The variable amount of AI that you have, and AI's propensity to spread all over the place, make it a lot like a liquid.
Jun 7 • Miles Brundage
May 2025
Why We Need to Think Bigger in AI Policy (Literally)
More attention is needed to the overall safety and security practices of AI companies, rather than just the properties of individual AI systems.
May 18 • Miles Brundage
April 2025
New Article with Grace Werner: America First Meets Safety First
Competition and cooperation need to be carefully balanced given the impossibility of an AGI monopoly.
Apr 23 • Miles Brundage
March 2025
Dialogue with Dean Ball on his Private Governance Proposal
Dean and I discuss the finer points of private governance, and I do something kind of crazy with Google Docs.
Mar 22 • Miles Brundage
Why I'm Not a Security Doomer
Preventing theft of AI systems is hard. But sometimes it's worth it, and protecting model weights specifically is a good target.
Mar 11 • Miles Brundage
February 2025
Some Very Important Things (That I Won't Be Working On This Year)
I'd love to see more people working on these 5 topics.
Feb 15 • Miles Brundage
Emergency blog post - reflections on VP Vance's speech at the AI Action Summit
Brief thoughts from Paris Aéroport on the more surprising parts of his speech.
Feb 11 • Miles Brundage
The Real Lesson of DeepSeek's R1
R1 is just the latest data point indicating that superhuman AI will be easier and cheaper to build than most people think, and won't be monopolized.
Feb 2 • Miles Brundage
January 2025
DeepSeek and the Future of AI Competition with Miles Brundage
terrible takes can't stand uncorrected
Published on ChinaTalk • Jan 25
Feedback on the Second Draft of the General-Purpose AI Code of Practice
Compliance with the most intensive requirements is (mostly) feasible for big companies, but that doesn't mean all is well.
Jan 15 • Miles Brundage